Maslow’s well-known pyramid-shaped hierarchical diagram, with our basic needs at its foundation and self-actualization at its summit, was more than just a collection of labels; it was a comprehensive roadmap for a healthy, well-balanced life. Its checkpoints included keeping a proper pH balance, being active, building friendships, and finding a sweetheart. He took the homeostasis of the body—its evolved ability to crave food when undernourished, for example—and applied it to the whole of life in an attempt to keep the world healthy in body and in mind … and to free up the psychiatrist’s couch. BS
1944
Game Theory
J. von Neumann and O. Morgenstern
An applied mathematical theory used to evaluate competitive situations
A branch of applied mathematics, game theory explores situations—called games—in which competing players have to make decisions knowing that the outcome of the game is also dependent upon the choices the others make. Such games require you to think not only about what is best for you, but also about what the other players are likely to do. Game theory studies such interactions in order to determine what choices each player should make.
In 1944, mathematician John von Neumann (1903–57) and economist Oskar Morgenstern (1902–77) published their book Theory of Games and Economic Behavior, in which they laid the foundations for modern game theory. Von Neumann and Morgenstern focused mostly on “zero-sum” games: situations in which one player’s gains are exactly balanced by another player’s losses, so that, when both outcomes are added together, the net result is zero. Other researchers, notably mathematician John Nash (b. 1928), developed the theory further, applying it to scenarios in which multiple players compete for shared goals, in which some players know more or less than others, or in which games take place over a series of steps or stages.
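To make the zero-sum idea concrete, the short Python sketch below computes the “maximin” and “minimax” security levels of a hypothetical two-strategy game; the payoff numbers are invented purely for illustration and are not taken from von Neumann and Morgenstern’s book.

```python
# Hypothetical zero-sum game: entries are Row's winnings, which Column loses.
payoffs = [
    [3, -2],   # Row plays strategy 0
    [-1, 4],   # Row plays strategy 1
]

# Row's security level: the best of its worst-case payoffs (maximin).
row_maximin = max(min(row) for row in payoffs)

# Column's security level: the smallest column maximum it can concede (minimax).
col_minimax = min(
    max(payoffs[r][c] for r in range(len(payoffs)))
    for c in range(len(payoffs[0]))
)

print("Row can guarantee at least:", row_maximin)      # -1
print("Column can hold Row to at most:", col_minimax)  # 3
# When the two values coincide the game has a saddle point in pure strategies;
# when they do not (as here), von Neumann's minimax theorem says they meet once
# mixed (randomized) strategies are allowed.
```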
“In terms of the game theory … the universe is so constituted as to maximize play.”
George B. Leonard, writer and educator
Though its name sometimes leads people to believe it is trivial or useful only for entertainment, game theory reaches across disciplines, influencing any scenario in which people are engaged in competition, such as law, economics, politics, or even warfare. Game theory is abstract and often simplifies the complex human behaviors it describes, yet even in this simplified form it allows researchers to analyze human activity with precision. MT
1944
Hell Is Other People
Jean-Paul Sartre
Eternal torment comes from damaging relationships between people
In 1944, French existential philosopher and writer Jean-Paul Sartre (1905–80) wrote his play Huis Clos, often translated as “No Exit.” In it, Sartre tells the story of three deceased people who must spend the afterlife confined in a room together. The relationships that develop between the three prove to be so acrimonious that one character concludes the play by saying that “hell is other people.”
For Sartre, an atheist and existentialist, the promise of spending an afterlife exalted in heaven or condemned in hell was a fiction. Nevertheless, he wrote No Exit to describe what he saw as hell: a world where people must spend eternity with those they cannot get along with. True hell, for Sartre, is the pain and suffering endured from having to spend a life in bad relationships.
“There’s no need for red-hot pokers. Hell is—other people!”
Jean-Paul Sartre, Huis Clos (1944)
The despair and bleakness present in No Exit led many to believe that Sartre intended the statement “hell is other people” as a descriptor of the realities of human existence in general, and that the only way to achieve true happiness was to live in isolation. Sartre himself later clarified his intention, saying that if people are in a spoiled or damaged relationship, their existence is hell, but that does not mean that all relationships are by nature hellish. To anyone concerned with their social relationships and the pursuit of a fulfilling or happy life, the notion that hell is other people is, at the very least, troubling. The idea forces us to ask questions about ourselves, about those with whom we share our lives, about how we affect one another, and about what, if anything, we can do to find happiness. MT
1944
The Road to Serfdom
Friedrich von Hayek
A warning that state-controlled economies always lead to a loss of personal freedom
Friedrich von Hayek with a class of students at the London School of Economics in 1948.
British economist Friedrich von Hayek’s (1899–1992) book The Road to Serfdom (1944) was an ode to Liberalism: the ideology of freedom, equality, and small, noninterventionist governments. At the core of the work is a warning that any government that creates and fosters a planned, state-controlled economy will always gravitate to totalitarianism. For Hayek, economic plans always involved social objectives—but whose objectives? In such a regulated system, it is easy for special interests to be favored, but are these interests for the social good? However they are packaged, he warned, big, centrally controlled governments inevitably lead to a loss of freedom—and serfdom.
Hayek never expected his book to make much of an impact, but nonetheless felt that writing it was “a duty that I must not evade.” After struggling to find a publisher, the initial run of 2,000 copies sold in days. Another 30,000 followed, and when Reader’s Digest serialized it in 1945 the book began to sell in millions. It went on to become recognized as one of the twentieth century’s most expressive accounts of the nature of market libertarianism, and the threats that it faces.
“In my opinion it is a grand book … I [am] … in agreement with virtually the whole of it.”
John Maynard Keynes, economist
Hayek felt that planning and intervention stifled spontaneous responses and initiatives by individuals to achieve economic outcomes, and believed that market forces alone could provide a populace with all of its needs. The free market, he claimed, would always win out over a socially planned economy because “it is the only method by which our activities can be adjusted … without the coercive or arbitrary intervention of authority.” BS
1945
United Nations
United States
An organization of world governments formed to prevent worldwide war
U.S. president Harry Truman speaking from the podium at the signing of the United Nations Charter in 1945.
Founded in 1945 at the instigation of the United States, the United Nations is an international organization of member nations that serves to promote world peace, security, international cooperation, and economic development. As of 2011, when the Republic of South Sudan joined, there were 193 member states.
U.S. President Franklin D. Roosevelt (1882–1945) is credited with coining the name “United Nations,” although the idea arguably first arose with German philosopher Immanuel Kant, who, in 1795, proposed an international organization allowing nations a way to promote peace and cooperate to expand commerce. The United Nations was intended to replace the League of Nations, the international organization formed in the wake of World War I (1914–18). Like its predecessor, the United Nations was created to give countries a forum for resolving disagreements in order to avoid potential conflict. By 1939, after the obvious failure of the League of Nations in preventing a new war, talks had begun among the Allied nations to create a new international organization. On October 24, 1945, a majority of the original fifty signatory nations adopted the United Nations charter, and so it came into being.
“The United Nations system is still the best instrument for making the world less fragile.”
Carlo Azeglio Ciampi, politician
While the League of Nations failed to prevent global conflict, the United Nations has, arguably, succeeded. Wars have not stopped since its formation, but neither has there been a worldwide conflict since 1945. And although coincidence does not a cause make, the United Nations has come to serve many additional purposes, and has maintained its place as the largest international organization of nations. MT
1945
Passive-aggressive Behavior
William Menninger
The identification of a subtle form of below-the-surface aggression
While a colonel in the U.S. armed forces during World War II (1939–45), psychiatrist William Menninger (1899–1966) began to notice a pattern of behavior among enlisted men. They followed orders, but did so with a benign air bordering on disobedience. Menninger referred to this behavior as “below the surface hostility,” and he gave it a label: passive-aggression.
There are now five commonly accepted levels of passive-aggressive behavior. The first, temporary compliance, is when someone complies with a request but delays acting on it. The second, intentional inefficiency, is doing something when asked but doing it in an unacceptable fashion. The third, escalation, is when a problem is allowed to escalate and pleasure is taken from the angst that follows. The fourth, hidden but conscious revenge, occurs when someone decides to “get back” at someone who they feel has slighted them by, for example, stealing from them or damaging their property. The fifth level, self-depreciation, is where the passive-aggressive person goes to destructive lengths in the pursuit of vengeance.
“Denying feelings of anger is classic passive-aggressive behavior.”
Psychology Today (November 23, 2010)
Passivity and aggression are at opposite poles of our ability to communicate. The passive person allows their rights to be violated and does not speak out for fear of what might follow, while the aggressive person speaks out with no regard for the consequences or effect on others. Passive-aggressive behavior, however, does not alternate between the passive and the aggressive. It is a mix of the two, a combination of negativistic attitudes and a pattern of intentional inefficiencies that usually begins to appear in late adolescence. BS
1946
The Intentional Fallacy
William Wimsatt and Monroe Beardsley
Knowing an artist’s intent is irrelevant when considering an artistic work
In literary criticism it is not necessary or desirable to understand what an author intended, according to the intentional fallacy postulated by English critic William K. Wimsatt, Jr. (1907–75) and U.S. philosopher of art Monroe Beardsley (1915–85). Even if it were possible to understand what a writer, or any artist, intended when creating a work, the intentional fallacy disregards such intent and insists that focusing on what the work is, and what it accomplishes, is the best way to evaluate it.
Wimsatt and Beardsley published their paper “The Intentional Fallacy” in 1946. They rejected the view that if an artist creates something with a clear intention or meaning in mind, then the art necessarily reflects that meaning. Their position on criticism also opposed the neo-romantic idea that knowing an artist’s personal motivation is paramount to understanding a work. Understanding what an artist wanted to do, and whether the work accomplished that desire, was seen as irrelevant. Along with other critics who favored evaluating a work without considering outside factors, the authors of the intentional fallacy established a key element of what became known as New Criticism.
“Critical inquiries are not settled by consulting the Oracle.”
William K. Wimsatt, Jr. and Monroe Beardsley
Although New Criticism and the intentional fallacy have faded in importance since the 1960s, they changed the way people view poetry, literature, and art. Looking at the Mona Lisa (1503–06), does it matter what Leonardo da Vinci intended? Reading the epic poem Beowulf (c. 850), does ignorance of the author make it less important or exciting? The intentional fallacy says “no,” and states that we can, and should, measure art’s meaning without the artist having to whisper in our ear. MT
1946
Child-centered Child Rearing
Dr. Benjamin Spock
A revolutionary approach to child care that has continued to this day
Dr. Benjamin Spock looks amused by the antics of two young patients in 1946.
Fluctuations in approaches to child rearing have been commonplace ever since the late 1700s, which is hardly surprising because raising children anywhere has always been governed by the cultural and historical norms of the time. The Great Depression was an era of survival-centered families, as parents fighting poverty and even malnutrition rightly worried for the wellbeing of themselves and their children. As the Depression eased, families became more parent-centered and children were seen as “self-regulating.” Then, in the late 1940s, came a revolution in child-rearing—the publication of the Freudian pediatrician Dr. Benjamin Spock’s (1903–98) bestselling book, The Common Sense Book of Baby and Child Care (1946). Child rearing in the United States would never be the same again.
“We believe that young children should be in bed by seven …”
Dr. Benjamin Spock, Baby and Child Care (1946)
One of the most influential pediatricians of all time, Spock was the first to use psychoanalysis to try to comprehend the needs of not only the child, but its parents, too. He wanted parents to be more affectionate, more considerate, to treat their children as people. Gone was the rigid, authoritarian approach to feeding and toilet training, the dearth of touching and outward displays of familial love. “Spock’s Generation,” as these children were sometimes called, would be kissed and cuddled, and encouraged to sit on their parents’ laps. Critics, however, say he helped raise a generation of “molly-coddled” adolescents.
Nevertheless, by the time of his death in 1998 Baby and Child Care had sold more than fifty million copies and been translated into thirty-eight languages. Validation enough for his refreshing approach. BS
1947
Pop Art
Eduardo Paolozzi
An art style that drew on elements of popular culture and reinterpreted them
Eduardo Paolozzi’s I Was a Rich Man’s Plaything (1947), the opening salvo of Pop art.
Pop art developed in Britain and the United States in the mid- to late 1950s, but its first example was created in 1947. I Was a Rich Man’s Plaything by printmaker Eduardo Paolozzi (1924–2005) was a collage of commercially printed papers mounted on a card support. It incorporated a cover of the magazine Intimate Confessions featuring a scantily clad young woman, along with a World War II bomber, an advertisement for Coca-Cola, other small images, and—significantly—the word “Pop,” torn from the pages of a comic book. The collage would become the movement’s standard-bearer, even though the term “Pop art” would not be used until coined by the art critic Lawrence Alloway in 1954.
Pop art drew on the art that had grown up around everyone, intrinsic in everyday items such as movie posters, household appliances, advertising billboards, comic books, science fiction, and food packaging—all reflecting a growing economic prosperity. Pop art would blur the distinction between “high” art and the “low” art of popular culture.
“[Paolozzi] cut and pasted the roots to what would become a … movement.”
Kyuhee Baik, “The Creators’ Project” website
After Paolozzi, the precursor of this avant-garde approach was The Independent Group, a British association of young artists and painters who first met as a group in 1952, determined to challenge prevailing modernist interpretations of fine art and discover the aesthetic beauty of the everyday “found object.”
Examples of Pop art include images of Marilyn Monroe and Campbell’s soup cans made by Andy Warhol (1928–87) in the 1960s. His repetition of common images lifted them into the realm of art, or, at least, into that of the coolly ambivalent and very curious. BS
1947
Counterfactuals
Nelson Goodman
An explanation of causation expressed in terms of a conditional statement
A counterfactual is a statement of the form, “If P were true, then Q would be true,” where P is actually false. U.S. philosopher Nelson Goodman (1906–98) launched the contemporary discussion of counterfactuals in an article in Journal of Philosophy, published in 1947.
In his seminal article, Goodman noted that while counterfactuals are crucially important to reasoning in science, there was no account of how these sorts of statements could be true or false. The central problem was that counterfactuals made claims about the way things would be and not the way things actually were. Goodman proposed that a counterfactual was true only when it was appropriately backed by the laws of nature that hold in the actual world. While Goodman’s account was highly influential, it was eventually eclipsed in popularity by the highly technical “possible-world semantics” for counterfactuals of U.S. philosopher David Lewis (1941–2001), according to which the truth of counterfactuals was determined by what happened in universes other than the actual one.
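To make the possible-worlds idea concrete, here is a toy Python sketch of a Lewis-style evaluation: the counterfactual “if P were true, Q would be true” is checked at the P-worlds most similar to the actual world. The worlds, facts, and crude similarity count are invented for the illustration and greatly simplify the formal account.

```python
# Each "world" is just a dictionary of facts; the actual world is one of them.
worlds = {
    "actual":     {"struck": False, "lights": False, "match_wet": False, "floor_wet": False},
    "struck_dry": {"struck": True,  "lights": True,  "match_wet": False, "floor_wet": False},
    "struck_wet": {"struck": True,  "lights": False, "match_wet": True,  "floor_wet": True},
}

def similarity(name):
    """Crude closeness to actuality: the number of facts a world shares with it."""
    actual = worlds["actual"]
    return sum(worlds[name][fact] == actual[fact] for fact in actual)

def would(p, q):
    """'If p were true, q would be true': q holds at every closest world where p holds."""
    p_worlds = [name for name in worlds if worlds[name][p]]
    if not p_worlds:
        return True  # vacuously true if p holds in no world under consideration
    best = max(similarity(name) for name in p_worlds)
    return all(worlds[name][q] for name in p_worlds if similarity(name) == best)

# "If the match were struck, it would light" comes out true, because the dry
# striking-world is closer to actuality than the soaked one.
print(would("struck", "lights"))  # True
```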
“There is a possible world in which kangaroos have no tails …”
David Lewis, On the Plurality of Worlds (1986)
Counterfactuals have been studied extensively. Psychologists have investigated the role that counterfactuals play in reasoning and learning; economists, political scientists, and historians have proposed various methods for determining what “would have happened” on various counterfactual suppositions. In quantum computing, researchers have hypothesized that the output of a particular computer program could be determined by measuring the state of a quantum computer, even though the computer has not actually run the program. BSh
1947
Critique of Everyday Life
Henri Lefebvre
A radical critique of modern life from a neo-Marxist viewpoint
Henri Lefebvre (1901–99) was one of the last great classical philosophers, and there was barely an aspect of everyday life that he did not either write about or comment on, although his work has been sadly neglected in the English-speaking world. A Marxist intellectual and renowned sociologist, he wrote Critique of Everyday Life (a work in three volumes, published in 1947, 1961, and 1981) as a scathing assault on modernism and all of the trivialities it produced, and on the entwined way in which these trivialities operate together to oppress and alienate. The trivial, he said, should not be beyond the scrutiny of the philosopher.
Lefebvre argued that what modernity gives us in terms of technology and rational understanding, it strips away from us in our reduced ability to socialize. Work and leisure held no distinction either—both were regimented and lacking in spontaneity.
“The more needs a human being has, the more he exists.”
Henri Lefebvre, Critique of Everyday Life (1947)
Publishers Weekly called the book a “savage critique of our consumerist society,” an attack on modernism that had nothing to do with a longing for the quaintness of some hypothetical “golden age,” but was instead a rational clarion call to rebellion. “Man must be everyday, or he will not be at all,” Lefebvre said in Critique’s opening pages. He must make himself free of alienation if he is to be the “total man,” and the only way to do this is to rebel from the sameness of everyday life. He had no time for poets, or indeed for most philosophers, both of whom he believed spent their time wavering between the familiar and the trivial. Art was not the answer, either. Action was what was needed, not poetry and paintings. BS
1947
ISO
Switzerland
An organization established to standardize the world’s goods and services
The International Organization for Standardization (ISO) was created in 1947 to facilitate the standardizing of the world’s goods and services; it is headquartered in Geneva, Switzerland. A nongovernmental, voluntary organization comprised of more than 2,500 technical committees, it aims to promote global communication and collaboration, the development of worldwide standards in business and finance, and the growth of global trade. Draft standards are negotiated and, when developed, are shared with the ISO membership, who then comment and vote on them. Once a consensus is reached, the draft becomes an ISO standard. The ISO’s three official languages are English, Russian, and French, and the standards it promulgates are intended to be applied worldwide by importers and exporters in both the public and private sectors, and throughout all stages of any product’s design, engineering, manufacturing, and testing.
The organization was initially founded in 1926 to standardize the discipline of mechanical engineering, but the advent of World War II in 1939 forced it to disband temporarily in 1942. It was reorganized in 1946 after a meeting of representatives from twenty-five nations at London’s Institute of Civil Engineers; the new organization officially began its work in February 1947, working out of a private residence on Geneva’s Route de Malagnou. Some 164 countries now have representatives within the ISO.
The ISO is a publishing juggernaut, producing technical reports and specifications, guidebooks, and amendments to existing standards. So far the ISO has published more than 19,000 international standards. ISO standards ensure that goods and services are well made, safe, and reliable. The ISO does not itself decide what should be standardized, but instead responds to requests from industry and consumer groups. BS
1947
Alpha Male / Female
Rudolph Schenkel
A theory about hierarchy in human social behavior, based on a flawed theory about the structure of wolf packs
A lower ranked wolf is reprimanded by an alpha of its pack. Most wolves who lead packs achieve their position simply by mating and producing pups, which then become their pack.
The concept of the alpha male and alpha female—that in every human social group, and also in some animal groups, there will emerge a dominant male or female—was first suggested by the Swiss animal behaviorist Rudolph Schenkel (1914–2003), in his well-known paper “Expression Studies on Wolves” (1947).
“Attempting to apply information about the behavior of assemblages of unrelated captive wolves to the familial structure of natural packs has resulted in considerable confusion.”
L. David Mech, wildlife research biologist
In the 1930s and 1940s, Schenkel observed what he believed to be purposefully aggressive behavior among wolves as they appeared to vie for dominance of their packs. There was only one problem, and it was a major one: Schenkel’s subjects were all wolves held captive in zoos, and their behavior was not the same as that of wolves in the wild. Forcing wolves unrelated by family to cohabit a restricted space creates unnatural tensions and causes them to behave differently to familial packs in the wild. Nevertheless, the terms “alpha male” and “alpha female” took root, and in time they were applied to other animal species including, eventually and most commonly, humans. But, as U.S. wolf expert L. David Mech (b. 1937) pointed out, the theory had a fundamental flaw: studying wolf behavior in captivity and drawing generalized conclusions was analogous to basing our understanding of standard human behavior on what happens in refugee camps.
The term “alpha” was always a misleading one. Schenkel’s alpha wolves did not possess the traits that we have come to expect them to have: they did not always lead the hunt, were not always the first to gorge on a kill, and were not always the largest. They were alphas through deference as much as through dominance, and it was not unusual for them to assume, and then lose, their alpha status.
And for all those controlling, dominating human beings to whom the term has been applied? Perhaps much the same kind of fleeting, transient authority should have been ascribed to them, too. BS
c. 1948
The Kinsey Reports
Dr. Alfred Kinsey and others
Revolutionary ideas on human male and female sexuality and sexual behavior
Three women react in exaggerated shock to a review of Alfred Kinsey’s report on women in 1953.
The Kinsey Reports were two studies by U.S. biologist and sexologist Dr. Alfred Kinsey (1894–1956) and fellow researchers that, based on 11,000 interviews, explored human sexuality and significantly changed established views of it in society at large. First came Sexual Behavior in the Human Male, in 1948, followed five years later by Sexual Behavior in the Human Female in 1953. Kinsey’s findings placed human sexuality on the agenda as a legitimate object of study. His aim was to separate research on sex from moral judgment, at a time when sexuality was commonly discussed in terms of the “dangers” of masturbation, the “perversion” of any sexual activity conducted outside of the assumed norm, and even of sexually transmitted diseases being a “punishment” for sexual “wrongdoing.”
“The only unnatural sex act is that which you cannot perform.”
Dr. Alfred Kinsey
Such was society’s refusal to discuss sexual matters at the time that many of Kinsey’s findings seemed astounding: that women, too, were sexual beings, that masturbation and premarital intercourse were commonplace, and that more than a third of the interviewed adult males had had homosexual experiences at some point in their lives. The Kinsey Reports were particularly groundbreaking in their investigations of female sexuality and homosexuality, as discussion of both was still generally considered taboo. To a large degree, Western society’s frankness in expressing sexual mores, its ability openly to discuss women’s sexuality, masturbation, and homosexuality, and its readiness to distance research on sexuality from traditional Judeo-Christian moral biases, followed from widespread acceptance of Kinsey’s research. JE
1948
Feynman Diagram
Richard Feynman
A graphical representation of the interactions between subatomic particles
Feynman diagrams, named after their inventor, U.S. Nobel Laureate physicist Richard Feynman (1918–88), are visual depictions of how subatomic particles interact. They are composed of different types of lines, such as straight, wavy, solid, dotted, or looping, and drawn on charts of two axes, in which the vertical axis represents time and the horizontal one represents space.
Subatomic particles are notoriously difficult to observe, and equally hard to visualize. In order to describe accurately these incredibly small, fast, and invisible particles, theoretical physicists once had to work solely with formulas, relying on their understanding of mathematics to summarize the often very strange behaviors that occur at the quantum level. Then, in 1948, Feynman developed a method to translate the mathematical data into simple, visual depictions. With the aid of the diagrams, theoretical physicists could not only describe and explain what they were doing, but they also found the reverse to be true, that the diagrams aided in performing new calculations and opened up unfamiliar areas of research.
“You can’t say A is made of B or vice versa. All mass is interaction.”
Richard Feynman
Originally devised by Feynman for his studies in quantum electrodynamics, the diagrams were soon found to be valuable in nuclear, particle, gravitational, and solid-state physics, too. Even learned physicists steeped in mathematics discovered that working with simple diagrams allowed them to better visualize scenarios that had once been purely mathematical. More than that, they now had a powerful new calculation tool: the Feynman diagram was a new light with which to investigate the darkness. MT
1948
Cybernetics
Norbert Wiener
Using feedback from a system to improve its ability to achieve its intended outcome
Norbert Wiener, founder of cybernetics, in a Massachusetts Institute of Technology classroom in 1949. Wiener was a child prodigy who earned his first degree at the age of fourteen.
Named from the Greek word kybernetes, meaning “to steer” or “to navigate,” the mathematical world of cybernetics developed at a time when huge strides were being made in computing and telecommunications. The theory was developed gradually throughout the 1940s by mathematicians such as Julian Bigelow (1913–2003), Alan Turing (1912–54) (the “father of artificial intelligence”), and notably Norbert Wiener (1894–1964), who published his book Cybernetics, or Control and Communication in the Animal and the Machine in 1948. In the book, Wiener defined cybernetics as the study of the structures and possibilities inherent in a system that exists within its own “closed loop,” where any act within the system affects its own environment and produces information that allows the system to alter or modify its behavior. In other words, cybernetics is about systems that produce feedback, and how that feedback is used to improve the ability of that system to achieve its goal.
“ … the social system is an organization like the individual, that is bound together by a system of communication … in which circular processes of a feedback nature play an important part.”
Norbert Wiener, Cybernetics (1948)
For example, imagine someone trying to steer a boat along a given course. The wind and tide will both act on the boat with the effect of driving it off course. However, by looking where he or she is going and using this feedback to steer the boat either left or right, the person steering is able to keep the boat (or system) on course.
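Rendered as a minimal Python sketch, the same loop looks like this; the numbers are invented for illustration, and the simple proportional rule stands in for whatever corrective rule a real helmsman or autopilot might use.

```python
# A closed feedback loop: measure the error between the desired course and the
# actual heading, then steer against it, while wind and tide keep pushing back.
desired_course = 90.0   # degrees
heading = 70.0          # the boat starts off course
gain = 0.5              # how strongly each observed error is corrected

for step in range(8):
    drift = 3.0                          # the environment acts on the system
    error = desired_course - heading     # feedback: compare outcome with goal
    correction = gain * error            # respond in proportion to the error
    heading += correction - drift        # the system acts, and is acted upon
    print(f"step {step}: heading = {heading:.1f}")

# The heading settles near (though, with this simple rule, not exactly on) the
# desired course: the loop keeps feeding its own output back in as new
# information, which is the behavior Wiener's cybernetics set out to study.
```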
The term “system” can refer to biological systems in humans and animals, and also to technological systems and systems such as the environment or the economy. Cybernetics touches on numerous academic disciplines, including electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology, and has been applied to fields such as business management and organizational learning. Although cybernetics is often thought to be linked to Artificial Intelligence, the two are distinct from one another. BS
1948
Chemotherapy
Sidney Farber
The development of a new generation of drugs designed to fight cancer
Sidney Farber with a young patient in 1960. Thanks to Farber’s pioneering discovery, acute leukemia is today one of the most successfully treated nonsolid cancers in children.
A relatively unknown U.S. pediatric pathologist, Sidney Farber (1903–73), founded the Children’s Cancer Research Foundation at Boston’s Children’s Hospital in 1947 and became its first full-time pathologist. Farber was a part of the postwar boom in medical research, and was convinced that the lack of sustained research was the only thing standing between cancers and their inevitable cure. Farber was at the same time exasperated by the seeming powerlessness of modern medicine when faced with cancer, and very much wanted to offer children and adults suffering from leukemia something more effective and lasting than a brief, cortisone-induced respite from an inevitable, and very painful, death.
“Dr. Farber would say that, in cancer, the child is the father to the man. Progress in cancer research at the clinical level almost always occurs in pediatrics first.”
Dr. Emil Frei III, oncologist
Farber had been studying how folic acid accelerated the growth of bone marrow, and reasoned that if a drug capable of blocking folic acid could be synthesized, it might prevent the production of abnormal marrow common to leukemia. In 1947, he took the immunosuppressive drug aminopterin, which blocked chemical reactions necessary for DNA replication, and tested it on a group of sixteen young leukemia patients. Ten of the children entered temporary remission because the folic acid antagonists Farber used had inhibited the growth of their acute lymphoblastic leukemia cells. Farber published the results of his tests in the June 1948 issue of the New England Journal of Medicine.
His results were initially met with incredulity; it was, after all, the first time in medical history that nonsolid tumors, such as blood, had ever responded to drugs. Farber—a previously unheralded pathologist working from a basement laboratory—had effectively found a way to suppress the proliferation of cancer cells by denying them the ability to divide. In so doing, he had ushered in a new era of cancer chemotherapy, characterized by drug-induced remission. BS
1948
Information Theory
Claude Shannon
A mathematical theory for processing and communicating information
In 1948, U.S. mathematician and electrical engineer Claude Shannon (1916–2001) published an article titled “A Mathematical Theory of Communication” in the Bell System Technical Journal. In the paper, Shannon set out the fundamental problem of communication: to reproduce, at one point in space, a message that has been created at another. He described what he saw as the basic components of communication: the presence of an information source that creates a message; a transmitter that takes the message and transforms it into a signal; a channel to carry the signal; a receiver to take the signal and convert it into the message; and a recipient—either human or machine—for which the message is intended. The paper also established an entirely new idea—that digital or binary information could be treated like a measurable physical quantity, such as density or mass.
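The measure Shannon attached to that quantity is entropy, expressed in bits: the average information per symbol produced by a source. The short Python sketch below, with made-up probabilities and offered purely as an illustration, shows the calculation and why a predictable source carries less information.

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

fair_coin = [0.5, 0.5]     # maximally unpredictable source
biased_coin = [0.9, 0.1]   # mostly predictable source

print(entropy(fair_coin))    # 1.0 bit per toss
print(entropy(biased_coin))  # about 0.47 bits per toss: more compressible
```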
“ … semantic aspects of communication are irrelevant to the engineering problem.”
Claude Shannon, “A Mathematical Theory …” (1948)
Known today as the “father of information theory,” Shannon developed his theory while working for Bell Telephone Laboratories on the problem of how to most efficiently transmit information. “A Mathematical Theory of Communication” would prove to be a seminal piece of work, presenting a mathematical model for encoding information by attributing to it a value—either zero or one—and along the way proving that mathematics could be a tool to calculate the amount of information that a system could carry. To this day the paper provides guiding principles in the ongoing search for ever faster and more efficient communications systems, all the way from the Internet to Apple’s iPod. BS
1948
Binary Code
Claude Shannon
The conversion of information to strings of zeros and ones
Binary code lies at the heart of computer and telecommunication systems.
Among the millions of ways in which human beings have found to communicate, the use of zeros and ones might be the farthest abstracted from lived experience. However, computers would still be languishing in the prehistory of information technology had not binary code—series of zeros and ones representing communicative symbols or computer functions—come into its own.
Binary numbers can be found in ancient Vedic manuscripts, where they were used as a memory aid. However, it was not until German mathematician Gottfried Leibniz (1646–1716), father of modern-day calculus, developed a system of logic that represented verbal utterances in mathematical code that the binary system was used for more complex applications.
“ … one can construct a fairly good coding of the message on a 0, 1 channel …”
Claude Shannon, “A Mathematical Theory …” (1948)
In 1948, building on English mathematician George Boole’s (1815–64) algebraic system of Boolean logic, U.S. mathematician Claude Shannon (1916–2001) seized on binary code as the foundation for his groundbreaking paper “A Mathematical Theory of Communication.” Information, he argued, can be reduced to the total of ones and zeros it takes to communicate it via machine. (Zeros and ones were selected because these two numbers served to express the flow and stoppage of electricity through a transistor: “one” means the transistor is on; “zero” means it is off.) Shannon’s work was revelatory and formed the basis for the communication devices used in the twenty-first century. The theory makes it possible for communication devices to stream data and store vast amounts of visual and audio communication. MK
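As a small illustration of the principle, the Python sketch below reduces a short text to a string of zeros and ones and recovers it again; the 8-bit UTF-8 encoding used here is a modern convention, not something specified in Shannon’s paper.

```python
def to_binary(text):
    """Encode text as space-separated 8-bit groups of zeros and ones."""
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

def from_binary(bits):
    """Reverse the process: turn the zeros and ones back into text."""
    return bytes(int(group, 2) for group in bits.split()).decode("utf-8")

encoded = to_binary("Hi")
print(encoded)               # 01001000 01101001
print(from_binary(encoded))  # Hi
```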
1948
Self-fulfilling Prophecy
Robert K. Merton
A prediction that directly or indirectly causes itself to become true
The eminent U.S. sociologist Robert K. Merton (1910–2003) explained the term “self-fulfilling prophecy” (SFP) in 1948 as “a false definition of a situation evoking a new behavior which makes the original false conception come true.” In other words, believing something strongly enough will make it come true because you are consciously and subconsciously acting in ways that will encourage it to happen. SFPs are not mere perceptions flowing from a belief; there must also be measurable consequences that conform to that belief.
The term has its roots in the Thomas theorem, devised in 1928 by another U.S. sociologist, William I. Thomas (1863–1947), who was of the opinion that, “If men define situations as real, they are real in their consequences.” Even William Shakespeare was aware of the powerful effects of belief: “There is nothing either good or bad,” he wrote, “But thinking makes it so.”
“The Christian resolution to find the world ugly and bad has made [it] ugly and bad.”
Friedrich Nietzsche, philosopher
One of Merton’s best-known examples of a self-fulfilling prophecy is his “run on a bank” theory; that if a few people believe their bank is about to run out of money, and if word gets around to that effect, in no time at all crowds will converge on the bank, withdraw their money, and the bank will be left with no cash when otherwise it would have remained solvent.
SFP has become an influential component of cognitive and self-perception theories, but perhaps more significant is that it is now such a part of our everyday collective consciousness. For a social science term to enter the general lexicon is not without precedent, but for something called a “sociological neologism” to do it? That is almost unheard of. BS
1948
Holography
Dennis Gabor
The manipulation of light to create three-dimensional images
The world that we see around us is three-dimensional, but for most of history humans have been unable to realistically replicate that effect on a flat surface. Through the art of holography, the technique of making a three-dimensional image called a hologram, it became possible to bring a flat image to life.
The idea originated with the Hungarian-born scientist Dennis Gabor, who in 1948 came up with his theory of holography while working to improve the resolution of an electron microscope. Gabor’s theory, for which he was awarded the Nobel Prize for Physics in 1971, centered on the interference patterns of light waves (created when two waves interact with each other). He posited that the crests and troughs of the light waves emitted from a source contain all the information of that source, and that if the wave pattern could be frozen for an instant and photographed then the pattern could be reconstructed to create an image with the same three-dimensional characteristics as the original source. Gabor had some success with putting his theory into practice, but light sources at the time were not sufficient to produce satisfactory results.
The invention of the low-cost, solid-state laser in the early 1960s changed this, however, as it provided a more coherent light source. A key breakthrough came in 1968 with the work of Stephen A. Benton, whose white-light transmission holography could be viewed in ordinary white light. Holograms made using this technique created a three-dimensional “rainbow” image from the seven colors that make up white light. Moreover, this form of hologram could be mass-produced.
Holograms have since found a myriad of practical applications, from numerous security-related uses—such as the embossed holograms on credit cards—to employment in fields as diverse as computer technology, medicine, and art. BS
1948
Universal Declaration of Human Rights
United Nations Member States
A document composed of thirty articles, which provided the first global expression of rights to which all human beings are inherently entitled
A man studies a copy of the Universal Declaration of Human Rights, one of the first documents published by the United Nations.
The Universal Declaration of Human Rights was adopted by the General Assembly of the United Nations (UN) on December 10, 1948. It was born out of a feeling that the existing UN Charter, laid out in 1945 following the horrors and atrocities of World War II, was insufficient in making plain the rights of people. Thus, a new document categorically stating the inalienable rights of every human being regardless of their status, color, or religion—rights to which we are all inherently entitled—was deemed necessary. It nevertheless maintained the principle of the original UN Charter, which “reaffirmed faith in fundamental human rights, and the dignity and worth of the human person.”
“All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act toward one another in a spirit of brotherhood.”
Article 1, Universal Declaration of Human Rights (1948)
The declaration’s principal drafter was a Canadian, John Peters Humphrey (1905–95), a jurist, scholar, and long-time human rights activist. In 1946 Humphrey was appointed as the first director of the United Nations Division of Human Rights and he was also given the task of gathering and examining the documents that would inform and guide the commission in formulating the declaration. The former United States First Lady Eleanor Roosevelt (1884–1962) was appointed the commission’s chairperson, and also played a prominent role in the drafting process.
The declaration was, however, in essence, merely a political document, not legally binding despite there being a vigorous argument on the subject. It has also been criticized for being decidedly pro-Western due to the hegemonic presence of the United States (U.S. delegates to the commission made it clear that they themselves were not in favor of its articles being legally binding). For the Americans it was more a statement of aspirations than facts, which perhaps explains the declaration’s impotence in the face of countless global acts of genocide and state-sponsored terror in the years since its adoption. BS
1949
Big Brother
George Orwell
A frightening picture of a bleak, dark, totalitarian, and very possible future
The cover of a German translation of Nineteen Eighty-Four (1949) depicts the all-seeing eyes of Big Brother.
Big Brother is the omnipresent, public manifestation of “the Party,” a totalitarian government in Nineteen Eighty-Four (1949), a novel by British writer George Orwell (1903–50). Accompanied by the slogan “Big Brother Is Watching You,” the entity’s gnarled face appears on innumerable hoardings throughout the city in which the novel is set. But is he real, or simply a psychological symbol used by the Party to maintain control and turn its incompetence into glorious victories?
The novel’s central character, Winston Smith, is an employee in the Ministry of Truth who covertly loathes the Party but can never decide if Big Brother is real or just a manufactured image. The novel never confirms Big Brother’s existence, but whether he exists or not is hardly the point. The Party requires only that the populace believe that he exists so that it can benefit from the intimidation that the belief engenders.
“You had to live … in the assumption that every sound you made was overheard …”
George Orwell, Nineteen Eighty-Four (1949)
Set in the mythical nation of Oceania, the novel is a bleak fable of state-controlled perpetual manipulation, propaganda, and mind control, set against the backdrop of an unending war (which may or may not be occurring) with the perhaps mythical, Party-generated enemies of Eurasia and Eastasia, used as a means to justify continuing repression.
The powerful science fiction novel was written in the midst of the Cold War between the United States and the Soviet Union. Having fought in the Spanish Civil War and witnessed there the excesses of both fascism and Soviet-style communism, Orwell produced a dark vision of a totalitarian world as a warning to the West of what could happen if communism and fascism were left unchallenged. BS
1949
Doublethink
George Orwell
Believing in two contradictory notions at the same time with equal conviction
Coined by British writer George Orwell (1903–50) in his novel Nineteen Eighty-Four (1949), the term “doublethink” refers to holding a belief in two or more contradictory ideas that cannot be true at the same time. “War is peace.” “Freedom is slavery.” “Your life is predetermined but you have free will.” Such contradictory notions cannot be true at the same time, so those who believe that they are true are engaging in doublethink.
In the novel, the government, formed by a single ruling political party, controls the citizenry with a range of coercive tools, such as propaganda, secret police forces, torture, ever-present surveillance systems, and spies. The people may only hold a limited set of beliefs that support the regime’s political goals. Sometimes, the government requires people to accept two beliefs as true even though they contradict one another. So, through the miracle of doublethink, workers in the Ministry of Peace can engage in war, while those at the Ministry of Love feel at liberty to use torture for the purpose of extracting false confessions.
“To be conscious of complete truthfulness while telling carefully constructed lies …”
George Orwell, Nineteen Eighty-Four (1949)
Doublethink has become widely used in popular discourse to describe a situation in which a person expounds or supports contradictory notions. In psychological terms, doublethink is akin to cognitive dissonance, which occurs when holding contrary notions causes someone to alter their beliefs to better fit the contradictions. Political or social beliefs may be examined and criticized with reference to doublethink, and doing so helps to clarify why followers of an ideology, party, or doctrine often fervently pursue ideas that seem, to everyone else, plainly contradictory. MT
1949
Magical Realism
Alejo Carpentier
A Latin American writing style that mixes magic and urban grit
This rare hardcover edition of One Hundred Years of Solitude (1967) by Gabriel García Márquez hints at a moment in the novel when a Spanish galleon is found in the Colombian jungle.
The first practitioner of magical realism was the Swiss-born Cuban author, essayist, and playwright Alejo Carpentier (1904–80). Carpentier introduced the world to the new genre with his novel El reino de este mundo (The Kingdom of This World, 1949), a fiction set around the Haitian Revolution in the early nineteenth century. In the novel, historical accuracy blended with themes of voodoo and timeless natural rhythms to produce a wholly original historical account.
“The existence of the marvelous real is what started magical realist literature, which some critics claim is the truly American literature.”
Luis Leal, writer and literary critic
However, Carpentier’s examples of magical realism were not as stylized and implausible as those by authors he would subsequently influence, such as Colombian novelist Gabriel García Márquez (b. 1927). In what is still considered the style’s pivotal work, One Hundred Years of Solitude (1967), García Márquez wrote of old men with angels’ wings and raindrops falling as flowers—things that were a long way from the improbable yet still credible magic of Carpentier.
The term “magical realism” had been coined in 1925 by German historian and art critic Franz Roh (1890–1965), although there it was couched in an essay on a new style of painting, the New Objectivity. What Roh saw as “magic” was simply the beauty of the world around him; the term had little in common with the work of Carpentier and García Márquez. In those authors’ hands, magical realism is a mix of modern fiction, fantastical happenings, a layering of postcolonial ethnicities, multiple planes of reality, dreamlike states, ancient mythologies, and an age-old reliance on a sense of mystery, all in an effort to create a deeper, more authentic reality than conventional, realist approaches could ever hope to achieve. Distinct from the “fantasy” novel, which is set in fantastical though unfamiliar worlds, and often in the distant past or future, magical realism is rooted in the present day, among familiar images and themes, in a mix of mythical elements and fictional realism. BS
1949
The Second Sex
Simone de Beauvoir
An epochal book about the place and role of women
Simone de Beauvoir, pictured here in 1945, identified herself as an author rather than a philosopher, but today she is viewed as having made lasting contributions to the field.
A highly influential work of feminist philosophy, Le Deuxième Sexe (The Second Sex) was published in 1949 by French existentialist Simone de Beauvoir (1908–86). She began to write the book in 1946, only a year after French women were given the right to vote, and twenty-one years before they would legally be able to engage in birth control. Next door to France, in Switzerland, women would remain disenfranchised until 1971. Anyone wishing to understand where the fierce, almost wrathful strains in this tome came from would not have to look very far.
“One wonders if women still exist, if they will always exist, whether or not it is desirable that they should …”
Simone de Beauvoir, The Second Sex (1949)
Published in two volumes, The Second Sex was one of literature’s first attempts to view the sweep of human history from a strictly female perspective. In volume one, de Beauvoir delves into biology and psychoanalysis and the rise of the “superior” male from his beginnings as a nomadic hunter-gatherer, as she seeks to understand how women have come to be seen as the inferior “other” sex. In volume two, the writer follows the development of the female from childhood through adolescence to sexual maturity in an attempt to show that women are not created “feminine,” but become feminine as the result of a multitude of external influences and processes.
Not everyone appreciated de Beauvoir’s efforts. French existential author Albert Camus (1913–60) complained that it made Frenchmen look ridiculous. British poet Stevie Smith (1902–71) commented: “She has written an enormous book about women and it is soon clear that she does not like them …”
De Beauvoir refers to menstruation as a “curse” and to maternity as “servitude,” and she exhibits an almost paranoid hostility toward marriage, which she derides as a “narcissistic compromise.” No wonder, then, that the book qualified for the Vatican’s Index of Forbidden Books—a badge of honor, some might think, for this seminal attempt to correct age-old assumptions. BS
1949
Rhythm and Blues
Billboard magazine
A style of popular music of African American origin
“Rhythm and Blues” (R&B) was a term first used in 1949 by Billboard magazine. The phrase emerged as a result of the music industry’s attempt after World War II to find a new way to describe the musical category it had long known as “race records,” encompassing an amalgamation of predominantly gospel, jazz, and blues. In effect, record companies grouped together and rebranded music assumed to be generally produced by black musicians for black audiences—and in that regard, R&B was more a marketing category than a specific musical genre. Marketing beyond the African American community gave R&B a distribution that deeply influenced U.S. musical culture.
A major influence on R&B music was jazz, which itself flourished in the 1920s and 1930s, and was a deep source of inspiration for R&B throughout its development until the 1950s. In the 1950s and 1960s, the relationship became more reciprocal, with R&B influencing jazz in turn. R&B was further influenced by vocal groups such as the Drifters and the Imperials, and also by the early recordings of gospel singers such as Sam Cooke (1931–64), Al Green (b. 1946), and Curtis Mayfield (1942–99).
“Gospel, country, blues, rhythm and blues, jazz, rock ’n’ roll are all really one thing.”
Etta James, singer
R&B was very influential in the development of rock ’n’ roll (which in the early 1950s was nearly identical to R&B), and was later influential in the development of soul and funk. R&B remains an important (though loosely defined) category in popular music, with electronic elements pushing it in the direction of urban music. Overall, it continues to be among the most influential genres in the United States. JE
1949
Role Model
Robert K. Merton
The view that admiring and emulating certain people is a means of self-betterment
U.S. sociologist Robert K. Merton (1910–2003) spent almost his entire career, from 1941 to 1979, teaching at New York’s Columbia University, and is renowned for his pioneering theoretical work in the analysis of social dynamics. One book, Social Theory and Social Structure (1949; later revised and expanded in 1957 and 1968), saw him add to the study of sociology such new phrases as “unintended consequences,” the “self-fulfilling prophecy,” and “role model.” The latter is the idea that there exist people in society whom others look to as exhibiting a desired aspect of behavior, and whose mentality, conduct, and lifestyle they later emulate. The concept arose from Merton’s study of the socialization of medical students. He theorized that individuals will relate to an identified group of people and aspire to the social roles they occupy. Merton also noted that individuals may choose a succession of role models as they pass through life, emulating them in only restricted, specific ways rather than copying the totality of their lives.
Merton’s pursuit of what he called his “middle-range” approach to analyzing social structures led him away from abstract or grand speculations, focusing instead on the grittiness of everyday life. Combine this with an uncommon love and respect for language, and here was a man with a unique ability to create phrases and terms that were so innately evocative and relevant that they lost no time in passing from the world of academia into everyday speech.
Merton became a model of academic inquiry. The author of more than 200 scholarly papers, he was the first sociologist ever to receive his nation’s highest science award, the National Medal of Science—no mean feat for a young man born “into the slums of South Philadelphia,” as he once said, who walked into his first sociology lecture “purely by chance.” BS
1949
Infomercial
Tico Bonomo
A combination of information and a commercial punchline in a long, leisurely broadcast that provides entertainment in itself and persuades buyers wary of a hard sell
U.S. inventor and marketing personality Ron Popeil poses in front of a selection of the numerous gadgets that he designed and then sold using infomercials (1981).
When exactly the world’s first modern infomercial series came into being is difficult to pin down, but as good a candidate as any was an hour-long real-estate program that ran every Sunday on television in San Diego, California, in the 1970s. The concept of using television shamelessly to sell a product, however, can be traced back to the NBC TV series The Magic Clown (1949), created by Tico Bonomo (1925–99) as a vehicle to sell his taffy-like candy bar, Turkish Taffy. Bonomo’s program was followed by the development of direct marketing on U.S. television in the 1950s. One notable personality of that era was U.S. inventor and marketeer Ron Popeil (b. 1935), who invented numerous catchphrases while pitching kitchen gadgets, such as the Chop-O-Matic and the Veg-O-Matic, on television: “But wait, there’s more!” is just one well-known example.
“You can’t ignore the Topsy Turvy Tomato Planter which grows tomatoes upside-down and has been snapped up by some ten million customers.”
Scott Boilen, CEO Allstar Products Group
Infomercials, or Direct Response Television commercials, lasting anywhere from thirty to sixty minutes or more, began in earnest in the United States in 1984 when restrictions governing the number of advertising minutes permitted on broadcast television every hour were removed. Aimed at older audiences unresponsive to fast, high-pressure sales pitches, the infomercial had a more relaxed approach to selling. It took advantage of subtle production elements common to evening news programs, such as effective use of music, lighting, and flattering sets, all of which were integral to the delivery of the message.
Television stations favor infomercials because they reduce the need to purchase expensive “content.” Advertisers like them because they can run in the early morning when advertising rates are low compared to daytime or prime time. And infomercials work. In 2009, Allstar Products Group used the infomercial to sell 20 million of its “Snuggies”—blankets with attached sleeves—and these now enjoy cult status. BS
Contemporary
1950–Present
Part of the CMS (Compact Muon Solenoid) detector at CERN, the European particle physics laboratory. It was used with the large hadron collider to search for the Higgs boson.
The ideas from this period reflect a rapidly changing world. The pursuit of greater equality provided the driving force for social movements that promoted ideas such as civil rights, feminism, and Fairtrade, while concern for the environment came to the fore with innovations such as recycling, Freecycling, and passive housing. Scientific and technological developments, such as cloning and commercial space flight, brought ideas from science fiction to life, and the phenomenal success of Tim Berners-Lee’s concept to link the entire world together through shared data and information in a World Wide Web means that ideas both old and new can continue to be disseminated like at no other time in human history.
c. 1950
Pop Music
United Kingdom
A genre of music aimed at and enjoyed primarily by the young
The term “pop music” arose in the United Kingdom in the 1950s to describe popular music—rock ’n’ roll music and other genres—that was recorded for commercial purposes and was specifically aimed at the mass youth market. “Pop” is a shortened form of the term “popular music,” and as such it does not include such genres as jazz, folk, and world music. Pop songs generally tend to be around three minutes long, with lyrics on simple themes, and a catchy hook that draws in the listener.
By the early 1960s, British pop music had become distinct from rock ’n’ roll and was identified with the Merseybeat sound that originated in Liverpool, England. Its acceptance was spearheaded by the popularity of British and U.S. bands, in particular The Beatles, who greatly influenced fashion, fellow musicians, album concepts, music production, and the way that pop music was marketed. Later in the 1960s, pop came to refer to all kinds of popular music.
“Come on, come on, come on, come on / Please, please me … like I please you.”
The Beatles, “Please Please Me” (1963)
Pop music’s association with Western, capitalistic culture was such that it was banned in many communist countries. The Soviet Union regarded The Beatles as epitomizing a Western debauchery that could pervert Soviet youth. Nevertheless, young people in communist countries still managed to obtain copies of Beatles music, and some credit it and other pop music with helping to bring about the cultural revolution that led to the fall of the Iron Curtain in 1989.
Pop music has since spawned a huge, global industry. British and U.S. artists continue to dominate the pop music scene; other specific genres, such as Europop, are valued more locally. CK
c. 1950
Sound Collage
Various
A method of combining preexisting sounds or excerpts into one musical work
In the visual arts, the idea of combining “found” elements from various media into an original work was popularized in the early twentieth century by Georges Braque and Pablo Picasso. In music also, a collage combines elements from different existing pieces.
A musical collage is not a medley, in which different songs are made to fit together; rather, the individual parts should retain their distinct flavor, and the result could sound chaotic. Collage works, in which different strains by one composer are interlaced, have existed for centuries: Battalia (1673) by Bohemian-Austrian composer Franz Biber (1644–1704), for example, superimposes several tunes in different keys, and the Fourth Symphony (1910–16) by U.S. modernist composer Charles Ives (1874–1954) at times superimposes so many different tunes that two conductors are necessary. Collage emerged as a common technique among the avant-gardists of the 1950s and 1960s; one of the most prominent examples of that period is the third movement of Sinfonia (1968) by Italian composer Luciano Berio (1925–2003), in which many works appear over the steadily sounding third movement of Mahler’s Symphony No. 2. In popular music, probably the most groundbreaking example is the song “Being for the Benefit of Mr. Kite” on the album Sgt. Pepper’s Lonely Hearts Club Band (1967) by The Beatles, in which producer and arranger George Martin (b. 1926) spliced in cut-up snippets of fairground organ recordings to supply a circus atmosphere.
In electro-acoustic music, composers in the musique-concrète tradition, beginning in the late 1940s, would create collage works by combining sound objects from daily life. In hip-hop today, the sequencing of brief sampled sections from preexisting recordings to create new works is standard, although the result is not as chaotic as an avant-garde collage. PB
1950
Credit Cards
Frank McNamara
A small, usually plastic, card that authorizes the person named on it to charge goods or services to an account, for which the cardholder is billed periodically
A German credit card collector displays some of his collection in 1988, by which time many people were carrying wallets packed with various credit, debit, and store cards.
During the 1920s, a number of different companies in the United States began to issue credit cards as a way of offering their regular customers an easier method to make payments. However, these cards, issued by individual businesses, could only be used at a very limited number of locations, such as at the gas stations owned by the issuing company. It was not until 1950, when businessman Frank McNamara invented the Diners Club card, that the potential of credit cards as a much broader form of payment was realized. McNamara had the idea when he discovered that he had no money with him to pay a restaurant bill. The Diners Club card allowed cardholders to visit any restaurant that accepted the card to use it to pay for their meal.
“Money is just the poor man’s credit card.”
Marshall McLuhan, philosopher of media theory
The fundamental idea of personal credit had existed since ancient times, but it was not until the widespread adoption of credit cards that immediate, nearly universal credit became available to the average consumer. Credit cards offered consumers the freedom to make everyday purchases that they could pay for at a later time; they also eliminated the need to always have cash on hand or to hope a seller would accept a personal check. The system worked because the card issuer, usually a bank or other financial institution, implicitly agreed to pay the seller for purchases made with the card. Consumers who use credit cards, therefore, essentially take out instantaneous loans each time they use the card to make a purchase.
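That flow of obligations can be sketched in a few lines of code. The snippet below is a minimal, hypothetical model (the class names, merchant, and amounts are illustrative inventions, not any real payment system’s API): the issuer pays the seller at the moment of purchase, the cardholder’s balance records the instantaneous loan, and the balance is settled when the periodic bill arrives.

```python
# A minimal, hypothetical sketch of the credit card arrangement described above:
# the issuer pays the seller immediately and bills the cardholder later.

class Merchant:
    def __init__(self, name):
        self.name = name
        self.takings = 0.0

    def receive_payment(self, amount, payer):
        self.takings += amount
        print(f"{self.name} is paid ${amount:.2f} by {payer}")


class CardAccount:
    def __init__(self, issuer_name):
        self.issuer_name = issuer_name
        self.balance = 0.0                     # what the cardholder owes the issuer

    def purchase(self, merchant, amount):
        merchant.receive_payment(amount, payer=self.issuer_name)  # issuer pays the seller now
        self.balance += amount                                    # cardholder takes an instant loan

    def pay_bill(self, payment):
        self.balance -= payment                # cardholder settles up when billed


card = CardAccount("the card issuer")
restaurant = Merchant("a restaurant")
card.purchase(restaurant, 42.50)               # meal charged to the card
print(f"Cardholder owes ${card.balance:.2f} when the bill arrives")
```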
The introduction of credit cards had a profound effect on the spending habits of many people. With credit cards, it became second nature to buy now and pay later, because consumers did not have to worry about whether they had enough money in their bank accounts to cover their purchases. All they had to worry about was the bill, but that was a comfortable distance away. MT
1950
The Fermi Paradox
Enrico Fermi
Out of our vast universe, why has no one come to visit humanity on Earth?
Young stars flare in the Carina Nebula; given the right conditions, any star has the potential to support life.
The universe contains almost 100 billion observable galaxies, many of them far older than our own. With all that potential for life to evolve, and time for that life to develop interstellar travel, journey through space, and locate us via the radio waves we emit, it is extraordinary that we have not yet been visited by aliens. In 1950, Italian physicist Enrico Fermi (1901–54) was sitting with a group of friends who were pondering why we seem so alone. To Fermi, in whose mind the universe was surely teeming with advanced civilizations, it seemed a vexing paradox. “Where is everybody?” he asked.
Some academics have sought to answer the Fermi Paradox by citing the Rare Earth hypothesis, which argues that the chance of conditions being right for life to arise anywhere is so vanishingly small, even considering the innumerable stars in the universe, that it is very likely that we are the only life there is. Other solutions include the possibility that “they” are already here, or that they are signaling but we have not recognized their signals.
“Civilizations rise … within their own solar systems, with no interstellar colonization.”
Jennifer Ouellette, science writer
Fermi suggested that galactic colonization could be achieved quite easily, simply by inventing rocket ships and populating them with self-replicating robots. That may be a naive idea, but the more than thirteen-billion-year history of the universe certainly seems to allow enough time for a race to evolve and, by some estimates, explore a region of the galaxy 250 times over. However, if advanced life did evolve, why would it come here, to our obscure corner of the average-looking Milky Way galaxy? After all, as the great cosmologist Carl Sagan (1934–96) once said: “It is a very big cosmos.” BS
1950
Prisoner’s Dilemma
Merrill Flood and Melvin Dresher
A model shows how human cooperation and conflict influence self-preservation
Consider this: two people rob a bank and are caught and placed in separate cells. Each is understandably more concerned about their own lot than that of their accomplice, and each is, unbeknown to the other, offered a deal—if you testify against your accomplice, the charges against you will be dropped. If both take the bait and confess, the prosecutor gets two convictions and neither goes free. And therein lies the dilemma. Should the prisoners take the chance that the other will not confess, and so confess and gain their own freedom, or steadfastly refuse to confess and risk being the one to remain imprisoned if the other confesses? Individually each is better off confessing if the other does not, yet if both confess, their situation worsens considerably.
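The structure of the dilemma can be made concrete with a small payoff table. The sketch below is only an illustration: the specific sentence lengths are hypothetical (the light sentence for mutual silence is left implicit in the passage above), chosen so that the standard ordering of outcomes holds, and the code simply confirms that confessing is each prisoner’s best reply whatever the other does, even though mutual confession leaves both worse off than mutual silence.

```python
# Hypothetical payoffs: years in prison for (prisoner A, prisoner B).
# "confess" = testify against the accomplice; "silent" = refuse to confess.
PAYOFFS = {
    ("silent", "silent"):   (1, 1),    # both receive a light sentence on a lesser charge
    ("silent", "confess"):  (10, 0),   # the confessor goes free, the silent one serves the full term
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),    # both convicted: worse for each than mutual silence
}

def best_response(my_options, other_choice):
    """Return the choice that minimizes my own sentence, given the other's choice."""
    return min(my_options, key=lambda mine: PAYOFFS[(mine, other_choice)][0])

for other in ("silent", "confess"):
    print(f"If the other prisoner stays {other}, I should:",
          best_response(("silent", "confess"), other))
# Confessing is the dominant strategy for each, yet (confess, confess) leaves
# both worse off than (silent, silent): the dilemma described above.
```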
“Can ‘prisoners’ … sustain cooperation when each has [an] incentive to cheat?”
Avinash Dixit, economist
The dilemma was formulated in 1950 by U.S. mathematicians Merrill Flood (1908–91) and Melvin Dresher (1911–92) while working at the Rand Corporation on game theory—mathematical models examining conflict and cooperation between intelligent adversaries—and its application to nuclear strategy. The dilemma highlights the differences in rational thinking of the individual as opposed to the group. Members of a group who pursue individual self-interest could become worse off than group members who place the best interests of the group above their own. The dilemma assumes a bleak view of human nature, that we are inherently selfish and care little for what happens to others when our own self-interest is threatened, although this holds true more for large groups than small ones such as families. BS
c. 1950
Enneagram
Oscar Ichazo
A system of self-understanding based on the mystical properties of shapes
The enneagram is a geometrical figure comprising a triangle and an irregular hexagon overlapping to form a nine-pointed shape inside a circle. Armenian spiritual leader George Gurdjieff (1866–1949) was the first to suggest that the figure had mystical properties, and in the early 1950s it found a role at the heart of a personality profiling system developed principally by Bolivian self-development instructor Oscar Ichazo (b. 1931). Ichazo’s Enneagram of Personality was based on a distinctive numbering of the nine points and an analysis of the relationships between them.
In 1971, Ichazo opened his first training school in northern Chile, dedicated to instructing others in the interpretation and use of the enneagram system. The success of the program meant that by the early 1990s there were more than forty training centers located around the world. In these schools, practitioners are taught to recognize nine different “ego fixations” or personality types along with their associated basic fears, desires, and temptations. The system is intended as an exercise in attaining greater self-awareness and understanding, and enabling others to do the same.
“All knowledge can be included in the enneagram and … it can be interpreted.”
Marjorie Perloff, The Encyclopedia of Aesthetics (1998)
The original Enneagram of Personality system has been adapted in a variety of ways. A Jesuit priest named Bob Ochs, for example, taught the system alongside religious doctrine in the early 1970s. However, the various manifestations of the enneagram system have been criticized by both religious and secular groups as having little or no scientific grounding. The profiles offered are claimed to be either somewhat arbitrarily assigned or too broad to be of use in meditation. LW
1951
Musical Indeterminacy
John Cage
A compositional practice in which the musical outcome is not clearly defined
John Cage prepares for a performance at the Fondation Maeght, Saint-Paul de Vence, France, in 1966.
Indeterminacy is a term attributed to U.S. composer John Cage (1912–92), following the completion of his first indeterminate instrumental work, Imaginary Landscape No. 4, in 1951. It describes compositional practices in which the sounding outcome—either the length or the sound of individual parts, or both—of a work is not clearly defined. Thus the performer becomes a true co-creator of the work.
Cage’s works extended the musical realm, as he drew inspiration from Eastern philosophy, the New York School of visual artists, and Fluxus, an avant-garde group of artists. While indeterminacy is related to improvisation, there are some fundamental differences: instead of the fixed harmonic or structural framework of a typical improvisatory composition, performance actions are described. In Imaginary Landscape No. 4, radios are used, with whatever programs happen to be on becoming part of the piece. Indeterminacy reached its highest level with Cage’s composition 4’33” (1952), for which the descriptions and actions in the score are: three movements totaling four minutes and thirty-three seconds; at least one performer; no sound should be performed by the performer(s). A performance of the work thus consists of ambient noise, including sounds created by the audience.
“ …the indeterminate score must be treated as … a process not a product.”
Robert Zierolf, Indeterminacy in Musical Form (1983)
Musical indeterminacy raised questions that are still relevant to musical discourse: Who is the creator of an indeterminate work? Which sounds are considered musical? What constitutes a good performance of an indeterminate work? and, perhaps the most intriguing question, What defines a musical work? PB
1951
Televangelism
Fulton Sheen
A new breed of evangelists present their own versions of Christianity on television
Televangelist Dr. Robert H. Schuller poses in 1980 outside the newly constructed Crystal Cathedral in Garden Grove, California. Seating 2,736 people, it cost $18 million to build.
Televangelism—the use of the medium of television to evangelize or communicate the Christian gospel—started as a peculiarly U.S. phenomenon in the 1950s but has since spread to much of the Western world. Televangelism is lambasted by some critics as fraudulently misrepresenting the gospel, with as much time spent on fundraising as preaching; many preach the controversial “gospel of prosperity,” which holds that God blesses those He favors most with material wealth. Televangelists operate in a medium—television—that has its own set of values and pressures, which, critics argue, are not always compatible with the message being preached.
“We are all born with the power of speech, but we need grammar. Conscience, too, needs Revelation.”
Fulton Sheen, Life is Worth Living, TV program
Time magazine dubbed Roman Catholic archbishop Fulton Sheen (1895–1979) the first U.S. televangelist after his weekly program Life is Worth Living, filmed at New York’s Adelphi Theater, debuted on the DuMont Television Network in 1951. Sheen used a blackboard to help explain his themes. This debut was a long way from the shimmering reflections of the Crystal Cathedral in Garden Grove, California, built by televangelist Dr. Robert H. Schuller (b. 1926). Schuller began his ministry standing on the roof of his car at a drive-in theater and became the best-known U.S. televangelist with his own weekly show, Hour of Power, which began in 1970 and on which Schuller preached until 2006.
A common thread among televangelists is the relentless worldwide expansion of their ministries, with some preachers, such as Pat Robertson, being broadcast into as many as forty countries. Critics from within the church claim that the extreme fundamentalism of U.S. churches and society is a direct result of decades of televangelists promoting right-wing, politically inspired messages that have altered the once-forgiving fabric of the U.S. church. Television is not just a source of news and current affairs. It is also a powerful tool for propaganda. BS
1951
Peer Pressure
Solomon Asch
A study into how an individual is coerced into conforming with majority opinion
Asserting their maturity and independence, two boys light up cigarettes while an aghast younger companion looks on. Peer pressure dictates that he will probably attempt one, too.
Early in the life of Polish-born U.S. psychologist Solomon Asch (1907–96), his Jewish parents left a glass of wine for the prophet Elijah at Passover. “Do you think he will take a sip?” the young Solomon asked. “Definitely,” replied his uncle. And so with one suggestive comment that filled him with excitement and expectation, the boy stared at the glass of wine, and was sure he saw its level drop—just a little. It was remembering this experience that first prompted Asch’s interest in investigating how individuals can be coerced by others into modifying their thoughts or behavior.
“The ugly reality is that peer pressure reaches its greatest intensity at just the age when kids tend to be most insensitive and cruel.”
Walt Mueller, authority on youth culture
It was in 1951 that Asch, by then working at Swarthmore College in Pennsylvania, staged his conformity experiments, designed to demonstrate the human susceptibility to conforming to majority opinions. Asch recruited 123 male subjects and placed each in a group whose other members had been secretly instructed to provide obviously wrong answers to certain questions, in order to ascertain how many subjects would follow the majority by giving at least some of the same obviously incorrect answers. Three-quarters of the participants repeated the deliberately wrong answers at least once, and over the course of the trials the overall level of conformity was 37 percent.
Asch explained the results in terms of peer group pressure. A peer group overwhelmingly comprises human beings of equal status and age who tend to inhabit the same social strata. The term “peer pressure” refers to the power and dynamics used within a peer group to influence the actions, attitudes, or values of someone within the group. Peer pressure is a pervasive psychological construct, particularly among the young, who spend large amounts of time in school and various other structured groups, and it tends to most affect those harboring low levels of self-esteem. BS
1951
Two Dogmas of Empiricism
Willard van Orman Quine
No truths can ever be accepted as fact without first being tested
“Two Dogmas of Empiricism,” written by the U.S. philosopher and logician Willard van Orman Quine (1908–2000), first appeared in The Philosophical Review in 1951 and lost no time in becoming an object of controversy. Here was an attack on two of the pillars of logical positivism: the analytic-synthetic distinction (the notion that there is a sharp divide between logical truths and factual truths), and the long-held belief that there are particular truths that no possible future experience could ever cause us to consider false. In “Two Dogmas,” Quine not only argued that the analytic-synthetic distinction could never be clearly drawn, but also denied that any truths—even those “hard core” truths capable of being tested and verified, which come to us primarily through our sensory experiences of the world around us—are immune from revision in the face of future events. He conceded, however, that some propositions would be jettisoned only as a last resort.
“Analyticity can be defined only in terms of other equally suspect concepts …”
E. Sober and P. Hylton, Quine’s Two Dogmas (2000)
The logical positivists had installed the “analytic–synthetic” distinction as central to their philosophy. “Analytic” truths were those supposedly grounded in meanings alone, independent of matters of fact, while “synthetic” truths were those grounded in facts about the world. Both dogmas, Quine argued, are ill-founded. After explaining why, he went on to offer a theory of his own. Called epistemological holism, it holds that no theory can be tested in isolation but only against a background of innumerable and largely undetermined other beliefs; also, no truths are ever the result of purely a priori knowledge. BS
1951
Munchausen Syndrome
Richard Asher
A mental disorder leading to self-harm and attention-seeking for imagined illnesses
A still from The Adventures of Baron Munchausen (1988), a movie directed by Terry Gilliam.
Munchausen syndrome, named after Baron Hieronymus von Münchausen (1720–97), a German nobleman noted for telling fantastical stories of his own mythical exploits, was first observed and named in 1951 by British endocrinologist Richard Asher (1912–69), whose findings were published in the British medical journal The Lancet that same year. Munchausen’s is a psychiatric condition in which a person fakes or simulates injury or illness in order to be hospitalized and treated as a patient, often changing doctors and hospitals in the hope of being the center of medical attention. Munchausen’s is a chronic variant of a factitious disorder—an illness intentionally brought about solely for the purpose of gaining attention—and sufferers are often eager to undergo invasive surgery. Like Baron von Münchausen, they are also known to make false and incredible claims about their own lives.
“They show pseudologia fantastica and travel widely for attention (wandering).”
D. A. Swanson, The Munchausen Syndrome (1981)
What triggers Munchausen syndrome is not known because sufferers insist their conditions are real. Symptoms include an often spectacular-looking medical history, a high degree of medical knowledge, inexplicable relapses, and new symptoms that appear after other tests prove negative. Munchausen sufferers have been known to poison themselves, interfere with diagnostic evaluations, contaminate urine samples, open up wounds, and fail to take prescribed medications.
Diagnosis is difficult and only confirmed after a careful study of the patient’s behavior and responses to treatment. For cognitive behavior therapy to be effective, a person has first to admit to falsifying symptoms, which a Munchausen sufferer will never do. BS
1951
Rock ’n’ Roll
United States
A type of popular music characterized by a heavy beat and simple melodies
Bill Haley, of Bill Haley and His Comets, performs in New York City in 1960.
One of the earliest references to “rock” in a song title is found in “My Man Rocks Me,” a blues ballad from the 1920s that had plenty of the overtly sexual overtones that would come to define rock ’n’ roll almost three decades later. Early hints of what was to come could also be seen in the Boston Beat’s “Big Band Boogie” (1939), and later Jimmy Preston’s “Rock the Joint” (1949).
The five essential aspects of any rock ’n’ roll song are a 4/4 beat, a strong back beat, an equally strong rolling rhythm, a blues-scale melody, and at least one electric guitar. Rock ’n’ roll was beginning to emerge in the southern United States in the late 1940s and grew out of an amalgam of African rhythms and European instrumentation. Its immediate predecessor was the genre known as “jump blues,” a 1940s offshoot of Rhythm and Blues (R&B) that was popular in black dance halls. Precisely when the transition occurred is difficult to say because the change was gradual rather than a paradigm shift. One characteristic of the new sound in its very early years was its syncopated rhythms (additional impulses in between the beats), similar to those found in swing and dance music.
“Rock and roll music, if you like it, if you feel it, you can’t help but move to it.”
Elvis Presley, singer
In 1951, Pennsylvania-born radio disc jockey Alan “Moondog” Freed (1921–65) noticed that white teenagers were beginning to discover black R&B music. He gave it more air time and renamed it “rock ’n’ roll,” thinking that the black associations with “R&B” might dent its popularity with white audiences. He need not have worried. On April 12, 1954, Bill Haley and His Comets recorded “Rock Around the Clock,” the song that brought rock ’n’ roll into the mainstream. BS
1952
Scientology
Lafayette Ronald Hubbard
A pseudoscientific cult promising self-improvement through acts of auditing
Lafayette Ronald Hubbard (1911–86) entered the U.S. Navy in 1941 on the back of some impressive references. He never, however, achieved his apparent promise. Over the next two years Hubbard was involved in several incidents, including being sent home from a posting in Australia after a dispute with his superiors and mistakenly sailing his ship into Mexican waters and then conducting gunnery practice there. Hubbard was subsequently relieved of his command, and it was perhaps then that he first began thinking of founding the Church of Scientology.
First came his ideas about the relationship between mind and body, which he labeled “Dianetics.” Hubbard saw everyone’s goal as the subjugation of the “reactive mind,” which prevents us from being more ethical, centered, and happy. Dianetics involved an act of auditing, the asking of questions designed to eliminate past experiences that encourage and feed the negative, reactive mind. Only by facing and answering these questions can our true potential be realized.
“Scientology is evil … its practice is a serious threat to the community …”
Justice Anderson, Supreme Court of Australia
Hubbard incorporated these ideas into the body of beliefs and practices he called Scientology, established in 1952, teaching that the subconscious mind restricts us from being all that we can be, and that everyone needs to be freed from negative thoughts, which he called engrams. But over the years, the spirit of self-improvement in this pseudoscientific cult has not attracted as much attention as the reports of adherents being brainwashed, bullied, and harassed to donate their money. Litigation is used aggressively to limit the damage caused by these reports. BS
1952
The Urey-Miller Experiment
Stanley Miller and Harold Urey
An experiment to prove that life can arise from inorganic matter
In 1952 a chemistry graduate, Stanley Miller (1930–2007), and Nobel Laureate Harold Urey (1893–1981), who had proposed the idea of a “reducing atmosphere” (that Earth’s primitive early atmosphere lacked free oxygen and was rich in hydrogen-bearing gases), began a series of experiments designed to mimic that nascent atmosphere to see if it could be prompted through electrical discharges to generate organic compounds from inorganic matter, and so make possible the beginnings of life on Earth.
They constructed a series of three apparatuses—all of them closed systems in the form of a loop—into which was circulated a mix of hydrogen, methane, and ammonia. A container of boiling water added vapor to the gaseous soup, simulating a primitive vapor-heavy atmosphere, which was then subjected to a high-voltage electrical discharge before passing through a condenser to cool and trickle back into the original vial, to begin the cycle again. After two weeks, Miller and Urey noted that 15 percent of the carbon within the system had formed into organic compounds, and that a tiny percentage of the carbon had gone on to form amino acids. There were also traces of hydroxy acids and urea, all of which are among the building blocks of life.
“ … significant in convincing scientists that life is likely to be abundant in the cosmos.”
Carl Sagan, cosmologist
The results were published in an edition of Science magazine in 1953, and promptly ignited the public’s imagination. In no time the term “prebiotic soup” entered popular culture. The experiment had significant shortcomings: neither oxygen nor nitrogen, both vital ingredients in Earth’s atmospheric composition, was used, and many scientists now question the assumptions behind the reducing atmosphere theory. BS
1952
Paralympic Games
Ludwig Guttmann
The apex of major international sports competitions for disabled athletes
The U.S. and the Netherlands compete at the International Stoke Mandeville Games, July 30, 1955.
The Paralympic Games evolved from a competition in the 1940s involving World War II veterans with spinal-cord injuries. Their organizer, neurologist Dr. Ludwig Guttmann (1899–1980), called them the paraplegic games. Guttmann believed that the therapeutic benefits of sports were not only important to improving his patients’ physical rehabilitation but also helped to restore their confidence and self-respect.
During the war, Guttmann had organized wheelchair polo and basketball games for his patients at Stoke Mandeville Hospital in England and used the occasion of the 1948 London Summer Olympics to launch the first Stoke Mandeville Games. Those particular Games only involved sixteen patients competing in archery. Guttmann desired a truly international experience and in 1952 he included Dutch ex-servicemen in the competition. This would lead, he hoped, to the worldwide recognition that sports should be integral to any rehabilitation program. In 1960, the International Stoke Mandeville Games (now officially named the first Paralympic Games) were held alongside the Rome Summer Olympics and involved 400 athletes from twenty-three countries competing in a variety of wheelchair sports. The first Winter Paralympics began in 1976. The competition has expanded slowly to include amputees, the blind, and athletes with cerebral palsy.
“Without his efforts, they would have been condemned to the human scrapheap.”
British Medical Journal (1980)
In recognition of its status in the Olympic Movement, the “para” is now intended in the sense of an event run in parallel with the Olympic Games. The aim is to “enable Paralympic athletes to achieve sporting excellence and inspire and excite the world.” TJ
1952
Performance Art
John Cage
A style of art that imbued the everyday with the potential to stimulate our senses
John Lennon and Yoko Ono spent a week in bed in 1969 as an artistic protest against world violence.
The origins of performance art lie in the Futurist and Dadaist movements of the early twentieth century. However, it did not come fully to life as an artistic style until 1952, when an untitled event was held at the Black Mountain College in North Carolina. Orchestrated by a group led by composer John Cage, it featured a number of performances from different disciplines that took place within a fixed time bracket, but did not hold any narrative or causal relation to each other.
From there, a new approach to art began to evolve, in which the idea behind the art took precedence over the aesthetic—it was the concept that made and justified the art. By the 1960s performances were no longer even referred to as performances: they were “happenings,” “events”—involving impromptu gatherings at unlikely venues. Artists rejected clear approaches and narratives, emphasizing instead the validity of the viewer’s experience. Performers would come on stage for an evening of brushing their teeth or sitting reading a newspaper, or, as Yoko Ono did with John Lennon in 1969, lying in bed for a week. Performance art rarely resulted in a tangible object that could be purchased and displayed.
“ … ‘performance artist’ … includes just about everything you might want to do.”
Laurie Anderson, composer and performance artist
Art was being used in new and dynamic ways to comment on emerging social and ethical concerns in the politically charged era of civil rights and the anti-war movement. Female artists used their bodies to challenge attitudes to women. Within ten years performance art had gone global and direct to a new audience, eliminating the need for galleries, museums, and hierarchical interference. BS
1952
Groupthink
William H. Whyte
A psychological phenomenon with potentially disastrous consequences
“Groupthink” is what occurs when a group of people makes a bad decision because its individual members give in to group pressures. For example, when group members’ desire for harmony causes them to attempt to minimize conflict, they tend to conform their opinion to a decision that they believe will achieve consensus within the group. This desire means they fail to examine alternative ideas, solutions, or viewpoints that could be seen as controversial. The insistence on loyalty may lead to a prevailing, potentially fabricated, certainty that the chosen outcome was the best option. Groupthink can stifle creativity and independent thinking, and lead to irrational and flawed decisions.
The term “groupthink” was coined in 1952 by U.S. urbanist William H. Whyte (1917–99) in an article in Fortune magazine, in which he examined how industrial society was attempting to master group skills. He argued that society was subject to groupthink in a negative fashion, and that the individual had become “completely a creature of his environment, guided almost totally by the whims and prejudices of the group, and incapable of any real self-determination.”
“We are not talking about mere instinctive conformity … [but] rationalized conformity.”
William H. Whyte, Fortune (1952)
U.S. psychologist Irving Janis (1918–90) went on to develop the groupthink theory to describe systematic errors in collective decision making. He studied several U.S. military conflicts, and his books Victims of Groupthink (1972) and Groupthink: Psychological Studies of Policy Decisions and Fiascoes (1982) outline how he felt that groupthink had led to unwise decisions, such as the failure of the U.S. government to anticipate the Japanese attack on Pearl Harbor in 1941. CK
1952
DSM
American Psychiatric Association
The creation of standard criteria for classifying mental disorders
The publication in 1952 of the Diagnostic and Statistical Manual of Mental Disorders (DSM) was not the first time an attempt had been made to categorize known psychiatric conditions. In 1917 a manual containing twenty-two diagnoses was compiled by a group of psychiatrists and government bureaucrats, and by the 1920s almost every teaching center in the United States and many in Europe had their own individual system. In 1927 the New York Academy of Medicine began to move toward national nomenclature of disease, but it was not until the onset of World War II in 1939 that the need for a comprehensive reference book of mental disorders became apparent, as psychiatrists across the country became involved in the assessment and selection of tens of thousands of new recruits.
“DSM-1 … was the first official manual of mental disorders to focus on clinical utility.”
DSM fourth edition, text revision (2000)
By the late 1940s the call from psychiatrists across the United States for a renewed effort to categorize psychiatric conditions, especially personality disorders and stress-related illnesses, resulted in the publication of the first DSM. The manual was (and still is) a primary diagnostic tool, developed in conjunction with the American Psychiatric Association.
Although most clinical diagnoses tend to be the result of intuitive thinking and observation on the part of the clinician, the fact remains that categorizing disorders based on medical theories rather than on what is observable alone has always been considered scientifically prudent (cancers, too, are categorized according to their genetic characteristics). A uniform reference provides clinicians with a framework in which predictions, understanding, and, in time, cures can be realized. BC
1953
DNA Double Helix
James Watson and Francis Crick
The unlocking of the structure of life’s building blocks
James Watson (left) and Francis Crick with their model of part of a DNA molecule, May 1953.
As early as the 1940s scientists the world over knew that DNA was very likely life’s building block. They also knew that it was composed of adenine, thymine, cytosine, and guanine. The only problem was that no one had the slightest idea what a strand of DNA might look like. An X-ray diffraction photograph of DNA, now referred to as Photograph 51, taken in the early 1950s by Rosalind Franklin (1920–58), an expert in the technique at King’s College, London, seemed to suggest the braided twist of a helix. But it was far from certain.
In 1951 James Watson (b. 1928) and Francis Crick (1916–2004), the helix’s eventual co-discoverers, had theorized a triple helix, but their theory was flawed, as was that of chemist and Nobel Laureate Linus Pauling. Finally, in February 1953, realizing that the lack of a three-dimensional representation of the gene was the core problem confronting all molecular biology, Watson and Crick, with the benefit of the research of biochemist Erwin Chargaff (1905–2002), successfully calculated the pairing rules that led to the precise copying of molecules, the process essential for heredity, and in so doing uncovered the famed double-stranded helix.
“The structure is an open one and its water content is rather high.”
James Watson and Francis Crick, Nature (April 25, 1953)
Solving the pairing question was critical; DNA’s four nitrogenous bases always pair the same way: thymine to adenine, and cytosine to guanine. These pairings fit neatly between the gene’s two helical sugar-phosphate backbones and gave it its shape. Its form also meant it could “unzip” to both copy itself and carry genetic information. The world of biology would never be the same again. BS
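The pairing rule itself is simple enough to express directly. The short sketch below is a hypothetical illustration, not anything from Watson and Crick’s paper: given one strand, it derives the strand that would pair with it using the adenine-thymine and cytosine-guanine rule described above.

```python
# Complementary base pairing, as described above: A pairs with T, C pairs with G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the sequence of bases that pairs with the given strand, position by position."""
    return "".join(PAIRING[base] for base in strand)

original = "ATGCCGTA"                     # a hypothetical fragment of one strand
print(complementary_strand(original))     # prints TACGGCAT
```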
1953
The Beetle in a Box
Ludwig Wittgenstein
Language must be shared if it is to have any meaning
Imagine that everyone owns a small box, and in that box there is something only the owner can see. When asked what is in the box, everyone provides the same answer: a beetle. Soon people are talking about beetles even though it is entirely possible that each person means something completely different.
When René Descartes (1596–1650) introduced the idea of mind-body dualism, he believed that it was possible to discount other people, other minds, and all outside concepts and build a system of knowledge based solely upon one’s own thoughts. If this kind of Cartesian dualism is correct, it is possible for individuals to know what concepts and words mean simply by thinking about them. It is possible, therefore, for an individual mind to use a private language that no one else can ever know. Ludwig Wittgenstein (1889–1951) doubted this idea, and part of his book Philosophical Investigations (published posthumously in 1953) was devoted to debunking this notion of Cartesian dualism by showing that a private language is nonsensical. According to the beetle in the box hypothetical, all meaningful language must have some social basis and cannot exist solely in a person’s mind. So, if a private language makes no sense, then referring to the idea of separate, inscrutable minds is meaningless.
“The limits of my language means the limits of my world.”
Ludwig Wittgenstein
Wittgenstein’s beetle analogy, and his notion of private languages, became necessary reading for anyone studying or discussing the nature of human language. His observations on meaningful language necessarily being nonprivate are still widely discussed and debated today. MT
1953
“If a Lion Could Speak …”
Ludwig Wittgenstein
A philosophical conundrum seemingly concerning shared consciousness
One of the most quoted and beguiling statements in the history of philosophical inquiry, “If a lion could speak, we couldn’t understand him,” was written by the Austrian-British philosopher Ludwig Wittgenstein (1889–1951) and published posthumously in 1953 in his book Philosophical Investigations. What he meant by the statement is still open to debate and it remains one of the discipline’s great subjective teasers. Is it a comment on how we confront consciousness outside of our own? If a lion spoke in an audible language, why would we not be able to understand him? Why choose a lion? More to the point, would a talking lion even be a lion at all, and would his mind be a lion’s mind?
Wittgenstein may have been trying to unravel a philosophical riddle, or perhaps just delighting in creating one, or he may simply have been unwittingly bowing to “compulsive anthropomorphism,” the human need to give animals, and even inanimate objects, consciousness. The human race has done that for millennia: for example, in the Jewish Talmud, a dog drinks poison to save the life of its master. Humans have always been fooled by the apparent capacity of an animal to form conscious responses to inadvertent cues. So our conditioned response to Wittgenstein’s statement is that, of course, a lion can speak. And that leaves one further question: why would we not understand one if it did?
“I don’t know why we are here, but I’m pretty sure that it is not in order to enjoy ourselves.”
Ludwig Wittgenstein
Another interpretation remains: was Wittgenstein only reminding us of humanity’s gross neglect of the natural world, that before anyone can comprehend what is being said, we first need to better understand the creature that is saying it? BS
1954
Value-added Tax
Maurice Lauré
A consumption-based tax that strives for a more equitable approach to taxation
A value-added tax (VAT) is a sales tax levied only on the “value that is added” at each point in the economic chain of supply of a particular product or service, and is ultimately paid for by the consumer at the point of sale. The first modern VAT was introduced in France on April 10, 1954, as an indirect taxation on consumption by the French economist Maurice Lauré (1917–2001), a joint director of the French taxation authority. Although a version of the idea had been suggested in Germany in 1918, it was Lauré’s system that, for the first time in a modern economy, took the burden for collecting taxes away from the taxation authorities and retailers, and placed it in the hands of the taxpayer.
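A short worked example may make the mechanism concrete. The figures below are purely hypothetical: a 20 percent rate applied across a three-stage chain of supply. At each stage the seller charges VAT on its full selling price but reclaims the VAT already paid on its inputs, so the amount it remits corresponds only to the value it has added, and the entire burden ends up with the final consumer.

```python
# Hypothetical example: 20% VAT over a three-stage supply chain.
VAT_RATE = 0.20

# (stage, price excluding VAT at which the stage sells its output)
stages = [("farmer", 100.0), ("miller", 150.0), ("baker", 250.0)]

previous_price = 0.0
total_remitted = 0.0
for name, sale_price in stages:
    output_vat = sale_price * VAT_RATE        # VAT charged on the sale
    input_vat = previous_price * VAT_RATE     # VAT already paid on inputs, reclaimed
    remitted = output_vat - input_vat         # tax due on the value added at this stage
    total_remitted += remitted
    print(f"{name}: adds {sale_price - previous_price:.2f}, remits {remitted:.2f} in VAT")
    previous_price = sale_price

print(f"Total VAT remitted: {total_remitted:.2f}")
print(f"VAT borne by the final consumer: {stages[-1][1] * VAT_RATE:.2f}")
```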
West Germany adopted the VAT in 1968, after which most other Western European countries followed suit. However, Lauré’s tax was not the first time that the idea had been trialed: the first historical mention of a tax on goods and services occurred in Spain in 1342, when a tax on each stage of production was initiated to help boost the state’s dwindling coffers, and a consumption tax was introduced in the Netherlands by the duke of Alva in 1569.
“VAT has proved to be one of the EU’s most enduring exports.”
Adam Victor, The Guardian (December 31, 2010)
A VAT is now a prerequisite for any country seeking membership in the European Union and is an integral part of the taxation system of more than 145 nations. The only Organization for Economic Cooperation and Development nation that does not have a VAT is the United States. To limit the tax’s regressive nature (meaning that the poor pay a proportionally larger share of their income), most countries impose smaller VAT rates for basic consumer items than they do for luxury items. BS
1954
Warehouse Club
Sol Price
A retail store that sells discounted goods in bulk quantities to paying members
Warehouse clubs, also known as warehouse stores or wholesale clubs, were introduced by businessman Sol Price (1916–2009) in the United States. Unlike other retail stores, warehouse clubs require shoppers to buy a membership to shop there. In return, the clubs provide members with the chance to buy retail goods and services at discounted prices and in bulk quantities. Individual stores are large, expansive, and warehouse-like, hence the name.
Sol Price was an attorney in San Diego, California, in 1954 when he inherited an empty warehouse and decided to open a member-only retail store, with membership limited to government employees, a store he called FedMart. The format quickly became successful, with Price expanding it to include a wider range of items, including groceries and gasoline, as well as an in-store pharmacy and optical department. After opening multiple new stores and selling FedMart in the 1970s, Price opened a new warehouse club, Price Club, which later became Costco.
“In the final analysis, you get what you pay for.”
James Sinegal, Costco CEO and cofounder
Warehouse clubs exposed the average consumer to something closer to a wholesale market, where they could, like large retailers, take advantage of the lower prices that come with buying larger quantities. Though the advantage of buying in bulk is well known, most consumers could not afford to—or did not have the ability to—buy large quantities of items at a wholesale discount prior to the introduction of warehouse clubs. As an added element, the warehouse clubs also provide the feeling of exclusivity that comes with having to pay for the privilege to shop. MT
1954
Psychedelic Experience
Aldous Huxley
The use of psychedelic drugs to achieve a life-changing visionary experience
A woman dances in the glow of psychedelic light at the Avalon Ballroom in San Francisco in the 1960s.
The term “psychedelic experience” is as vague as the state it is attempting to describe. Individual experiences of hallucinogenic drug taking vary, but most agree that perceptions are made brilliant, intense, and more immediate. Things that would not normally be noticed are seen, and achieve depths of significance and clarity beyond their everyday essence. It is an altered, ineffable state that has similarities with ritualistic, mystical traditions found in Eastern patterns of thought.
In 1954 the great science fiction author Aldous Huxley (1894–1963) published The Doors of Perception, a book that described his experience of being under the influence of the psychedelic drug mescaline. In taking the drug, Huxley wished to have a glimpse of the higher states that visionaries such as William Blake (1757–1827) had described (the title of his book is in fact taken from a line in Blake’s work, The Marriage of Heaven and Hell, c. 1790). Huxley was profoundly affected by the experience, and concluded The Doors of Perception by arguing that mescaline (or something like it) should replace alcohol and other drugs because it provides the same sense of escapism and altered state of consciousness without the negative side effects.
“To be enlightened is to be aware, always, of total reality in its immanent otherness …”
Aldous Huxley, The Doors of Perception (1954)
The Doors of Perception received a mixed reception, but was nonetheless instrumental in paving the way for the psychedelic counterculture of the 1960s. Huxley continued to take psychedelic drugs for the remainder of his life, and was also influential in introducing other notable figures, such as poet Allen Ginsberg and psychologist and writer Timothy Leary, to the possibilities of psychedelic experimentation. BS
1954
Wicca
Gerald Gardner
A duo-theistic neo-pagan witchcraft religion oriented toward nature
While witchcraft was traditionally practiced—and widely persecuted—in Europe for centuries, Wicca’s relationship to pre-Christian forms of European witchcraft is largely based on rites or traditions adopted, and adapted, from a variety of pagan traditions, as opposed to a direct continuity over time. The Wicca religion became popular in England after English author Gerald Gardner (1884–1964) published Witchcraft Today, a guide to modern witchcraft, in 1954. The term “Wicca,” though not used by Gardner, was coined in the 1960s to mean pagan witchcraft.
While Gardner is widely credited for the development of the modern Wicca religion, his beliefs and teachings are not universally accepted by all practitioners of the religion, or by those of other neo-pagan religions. There is no single organization or group that represents all practitioners, and followers hold a range of beliefs. Many Wiccans hold that a female goddess and a male horned god exist as equal codeities; others venerate multiple gods or spirits. However, a belief in living in harmony with nature, the practice of magic, lunar and solar cycle rituals, and adherence to a moral code are common to all.
“The first time I called myself a ‘Witch’ was the most magical moment of my life …”
Margot Adler, author and Wiccan priestess
Wicca is relatively new, but it appears to be growing. There are about 1.2 million practitioners of witchcraft in the United States alone; numbers worldwide are difficult to assess. Wicca has influenced aspects of popular culture, with many novels, films, and television shows incorporating Wiccan ideas. It has also changed many of the negative associations that people had made with witchcraft and pagan religions. MT
1955
Family Therapy
Murray Bowen
A suggestion for an entirely new approach to understanding how families work
Born into a family from Tennessee and the oldest of five children, Murray Bowen (1913–90) earned his medical degree in 1937. In 1946, after serving in the army during World War II, he began his psychiatric training. In 1954 he began a five-year study involving families at the National Institute of Mental Health in Bethesda, Maryland, and it was there that he developed his theories about family dynamics.
At the time, the concept of families sitting together to discuss their issues and interrelations was largely unheard of. Bowen spent the late 1950s working with families that had an adult schizophrenic child living at home, and learned to observe the unfolding of family dramas. He became adept at observing human behavior and began what he called his Bowen Family Systems Theory. Bowen regarded the family as a single emotional entity, each member of which is intimately connected with the other. His theory contained eight interlocking concepts, including the triangle, or three-person relationship; the differentiation of self, whereby a person recognizes their interdependence with others; the emotional cut-off that involves ignoring unresolved issues through avoidance; and sibling positioning, the idea that a child’s position (middle child, etc.) can contribute to the development of particular traits.
“The runaway is as emotionally dependent as the person who never leaves home.”
Murray Bowen
Bowen was suspicious of psychoanalysis and instead used biological terms, sociology, and even anthropology to help make sense of human behavior. He was a pioneer who spent the rest of his life trying to understand how generations of family can shape us and help make us who we are. BS
c. 1955
American Civil Rights
Rosa Parks
A protest movement against racial discrimination in the United States
Rosa Parks sits in the front of a bus in Alabama on December 21, 1956, after segregation was ruled illegal.
The American Civil Rights Movement of the 1950s and 1960s was a mass protest movement against racial segregation and discrimination in the United States. It aimed to raise public awareness of the plight of African Americans by nonviolent forms of protest. The movement monitored the activities of the racist organization the Ku Klux Klan and started a drive for voter registration; activists protested through sit-ins, marches, and rallies. Protestors sometimes suffered brutal treatment at the hands of the authorities.
“We can never be satisfied as long as a Negro … cannot vote.”
Martin Luther King, Jr., Washington D.C., 1963
The American Civil Rights Movement came into being in 1955. Although there had been some protests prior to then, 1955 was a pivotal year because it was when Rosa Parks (1913–2005), an African American woman who said she was tired of the injustice, refused to give up her seat to a white man on a bus in Montgomery, Alabama. Parks was a member of the National Association for the Advancement of Colored People, an African American civil rights organization founded in 1909. She was arrested and thrown in jail for failing to follow the bus driver’s instructions. Parks was found guilty and fined. Her actions ignited the Montgomery Bus Boycott led by an African American clergyman, Rev. Dr. Martin Luther King, Jr. (1929–68). African Americans boycotted buses and organized car pools. The Montgomery Bus Boycott attracted national attention and established King as the leader of the American Civil Rights Movement. After becoming president in 1963, Lyndon Johnson (1908–73) spearheaded the Civil Rights Act of 1964 and the Voting Rights Act of 1965, which outlawed discriminatory voting practices. CK
1955
Cryptozoology
Bernard Heuvelmans
A pseudoscience devoted to discovering lost and mythical species
Cryptozoologists allege this to be the scalp and hand of a Yeti believed to have inhabited Nepal or Tibet.
The term “cryptozoology” comes from the Greek kryptos, meaning “hidden,” and refers to the “study of hidden animals.” Cryptozoology is the search for cryptids—animals and plants that science does not recognize and whose existence relies on what at best can be called circumstantial evidence and unverifiable sightings—in a discipline that is perhaps best described as pseudoscientific. Animals that are on the cryptozoologist’s list may include extinct animals, such as dinosaurs or the flightless dodo, or animals that only exist in myth or legend, creatures such as the unicorn, griffin, Abominable Snowman (Yeti), and Loch Ness monster. Even proponents of cryptozoology acknowledge that evidence for almost everything on their list of creatures is weak.
“Until critical thought and good research are commonplace … it will remain disrespected.”
Ben Roesch, zoologist
The father of cryptozoology is Bernard Heuvelmans (1916–2001), whose book On the Track of Unknown Animals (1955) referred to the comparatively recent rediscovery of species thought to be extinct (at least by Europeans), such as the giant panda and the pygmy chimpanzee. Heuvelmans was often criticized for his belief in cryptids, especially after his attempt in the late 1960s to classify the so-called Minnesota Iceman, later revealed to be a hoax, as a new species of hominid, Homo pongoides. It is the nature of cryptozoologists, however, to become excited on spurious evidence. The carcass of a Megalodon, an extinct shark of the Cenozoic era, turned out to be that of a basking shark, but a species of sauropod dinosaur believed to live in the jungles of the Democratic Republic of Congo still remains elusive. The search goes on … BS
1956
Bloom’s Taxonomy
Benjamin Bloom and others
The classification of education and teaching into three domains of learning
In 1948, Dr. Benjamin Bloom (1913–99) met with other college and university examiners at the American Psychological Association annual meeting; this group of educators ultimately produced the Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook I in 1956. The hierarchical classification system in this handbook is commonly known as “Bloom’s Taxonomy” and, more than fifty years after its publication, it is still used in teaching, curriculum writing, and learning theory disciplines. Bloom’s Taxonomy is designed to focus educators on all three learning domains—cognitive, affective, and psychomotor—to create a more holistic form of education.
The cognitive domain involves knowledge and the development of intellectual skills. This includes the recall or recognition of specific facts, procedural patterns, and concepts that serve in the development of intellectual abilities and skills. There are six major categories within the domain, which can be thought of as degrees of difficulty that must be mastered progressively. The affective domain addresses people’s feelings, values, appreciation, enthusiasms, motivations, and attitudes. The psychomotor domain includes physical movement, coordination, and use of the motor-skill areas to achieve speed, precision, distance, procedures, or techniques in execution.
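The progressive character of the cognitive domain can be shown schematically. The sketch below lists the six categories of the original cognitive domain in order from simplest to most complex; the example verbs and the small helper function are illustrative additions, not taken from the 1956 handbook itself.

```python
# The six categories of the cognitive domain in the original 1956 taxonomy,
# ordered from the simplest to the most complex; example verbs are illustrative only.
COGNITIVE_DOMAIN = [
    ("Knowledge",     ["define", "list", "recall"]),
    ("Comprehension", ["explain", "summarize"]),
    ("Application",   ["use", "demonstrate"]),
    ("Analysis",      ["compare", "distinguish"]),
    ("Synthesis",     ["design", "compose"]),
    ("Evaluation",    ["judge", "justify"]),
]

def level_of(category):
    """Return the 1-based position of a category in the hierarchy."""
    names = [name for name, _ in COGNITIVE_DOMAIN]
    return names.index(category) + 1

# Each category is meant to be mastered before the ones above it:
print(level_of("Comprehension") < level_of("Analysis"))   # True
```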
“What we are classifying is the intended behavior of students—to act, think, or feel.”
Dr. Benjamin Bloom
Today, Bloom’s Taxonomy is used to develop learning objectives. This is essential for teachers to plan and deliver instruction; design valid assessment tasks and strategies; and to ensure that instruction and assessment are aligned with the learning objectives. BC
1956
Mind-brain Identity Theory
U. T. Place
A materialist theory proposing that mental states are identical to brain states
Mental states may consist of sensations, beliefs, desires, and other more complex thought patterns, and the mind-brain identity theory attempts to explain these phenomena in materialist (or physical) terms. The earliest modern reference to the theory appeared in 1933, in the work of psychologist E. G. Boring (1886–1968), but its first explicit formulation came in 1956 in a journal article by philosopher and psychologist U. T. Place (1924–2000), whose ideas were further developed by philosopher J. J. C. Smart (1920–2012) in the 1950s.
“[In science,] organisms are able to be seen as physicochemical mechanisms.”
J. J. C. Smart, Philosophical Review (April 1959)
Attempts to explain the mind in physical terms can be seen as a response to the dualism that emerged from René Descartes’s philosophy in the seventeenth century. Place’s and Smart’s versions were both originally limited to arguing that all mental sensations are reducible to physical correlates in the brain. For example, the experience of pain is correlated with a certain type of neural activity. However, this version of the identity theory could not adequately account for the fact that many sensations, such as pain, are specific mental events and therefore should be identical with equally specific neural activity. In order to resolve this, later versions of the theory propose that even if similar mental states, such as pain, are exhibited in variable brain states, this shows only that many different neurons may be more or less involved in the formation of any particular mental state. Sophisticated brain-imaging technologies and recent discoveries about the plasticity of the brain are rapidly advancing to the point at which neural correlates for the most individualized mental states may be discoverable. TJ
1956
Cognitivism
H. Simon, N. Chomsky, and M. Minsky
A response to behaviorism whereby mental states are real, and can be evaluated
U.S. writer, educator, linguist, and proponent of cognitivism, Noam Chomsky, at home in Boston in 1972.
Cognition involves problem solving, decision making, memory, attention, language comprehension, and our ability to process information. Behaviorists acknowledged the reality of cognitive thought but considered it only as a form of behavior. Cognitivism, however, rejected what it considered the simplistic “cause and effect” approach of behaviorism, arguing that behavior is largely determined by the way we think, and that studying this is crucial to the understanding of psychology—and of ourselves.
An early pioneer of cognitivism was the Swiss psychologist Jean Piaget (1896–1980), whose research on cognitive child development showed children’s brains to have a far more refined mental structure than psychology had previously assumed. Piaget believed cognitive development to be a maturing of mental processes as a result of our biological development. He showed that the “hidden black box of the mind” can be opened far earlier in life than anyone else thought possible, and that people are not the “programmed animals” of the behaviorists, capable of little more than responding to external stimuli. Our actions, say the cognitivists, are a consequence of our thoughts. At a meeting in 1956, Herbert Simon (1916–2001), Noam Chomsky (b. 1928), and Marvin Minsky (b. 1927) set the guidelines for the development of cognitive science.
“ … certain aspects of our knowledge and understanding are innate …”
Noam Chomsky
Cognitive behavioral therapy, a recent amalgam of the cognitive and behavioral approaches, sees mental disorders as the result of distorted outlooks (such as the belief that life is hopeless)—perceptions that can be corrected if patients recognize and amend their errors in thinking. BS
1957
Objectivism
Ayn Rand
A novel philosophy disseminated by a provocative novelist
Ayn Rand at the National Book Awards in 1958. Her magnum opus, Atlas Shrugged (1957), encompassed her ideas of rationalism, individualism, and capitalism within a dystopian United States.
Frustrated with her intellectual climate, novelist and lay philosopher Ayn Rand (1905–82) collected ideas from a variety of philosophers and cobbled them into a unique view that she named Objectivism. She expounded on this personal worldview in her novel Atlas Shrugged, published in 1957. Rand defends Aristotle’s ideas that reality exists objectively and authoritatively, that the laws of logic guide our understanding, and that consciousness is the seat of humans’ ability to know. She defends the rationalist ideas that morality is objective and that conscious rationality bestows special moral significance. And she also defends the classical liberal idea that each person is obligated to respect every person’s right to pursue her interests, so long as those pursuits do not interfere with another person’s right to do so. Rand then argues that the only sociopolitical system consistent with these ideas is laissez-faire capitalism, that is, a free market economy.
“1. ‘Nature, to be commanded, must be obeyed’ or ‘Wishing won’t make it so.’
2. ‘You can’t eat your cake and have it, too.’
3. ‘Man is an end in himself.’
4. ‘Give me liberty or give me death.’”
Ayn Rand, sales conference at Random House, 1962
One controversial implication is what Rand calls “the virtue of selfishness.” Since each person is intrinsically valuable, one’s primary moral obligation is to pursue one’s own interests. This pursuit is limited only by the recognition that others are also valuable, and thus no one has the right to deceive or coerce others. Selfish interests cannot conflict because it cannot be in our interests to have something to which we have no right. Although some goods may result from collective action, such goods never justify the use of force.
Rand’s philosophy continues to spark controversy, especially among those who argue that some “social goods” cannot be achieved by individuals and that unacceptable economic inequalities result from unregulated trade. Though not all capitalists would call themselves Objectivists, many cite Rand as a formative influence, including economist Walter Williams (b. 1936) and politician Ron Paul (b. 1935). TJ
1957
Mythologies
Roland Barthes
A philosophical account of how social values develop mythical meanings
Hertereau and Jourdan’s wine poster (c. 1958) exemplifies Barthes’s notion of the mythology surrounding wine’s health and wellness properties.
Divided into two sections, Roland Barthes’s (1915–80) Mythologies (1957) explains how commonly accepted social values are constructed into myths. The first section is a collection of essays, most of which were originally published in the Parisian magazine Les Lettres Nouvelles. Many of the essays use examples of reports and advertisements in newspapers and popular magazines in which certain words and images are used to convey stereotyped meanings—especially those associated with French middle-class values. Barthes discusses how wine, for example, is typically associated with all that is good in French society. But this creates a mythology that disguises the harmful implications of wine consumption and production. In the second section of the book, Barthes argues that since myths are essentially constructed through language, they are best analyzed by the methods used in the field of semiotics. By showing how words function as signs that can potentially carry many different meanings, semiotics can reveal the process that causes a particular symbolic meaning to become mythologized. Barthes then outlines seven main rhetorical techniques that are often used to construct and preserve these myths.
“Myth does not deny things, on the contrary, its function is to talk about them; simply, it purifies them, it makes them innocent.”
Roland Barthes, Mythologies (1957)
In describing his use of semiotics to expose the construction of various cultural myths, Barthes refers to the theory developed by linguist Ferdinand de Saussure (1857–1913). Although Barthes adopts de Saussure’s method, much of his analysis of middle-class myths is guided by a Marxist interpretation of consumer values. He also notes how this method has much in common with that of psychoanalysis.
Ironically, Barthes’s demystification of the use of myths to sell consumer products has given the advertising industry a firm foundation for its persuasive techniques. Nevertheless, the book’s ideas have been just as seminal in laying the foundations for even more influential theories of social criticism. TJ
1957
Tipping Point
Morton Grodzins
The critical point of accumulation that may cause a change, with major consequences
A tipping point describes an event where the concentration of certain elements in a system reaches a level that triggers a major change. It can be applied to the behavior of virtually any phenomenon in any field. Its effects can be sudden and quickly reversible, such as when a canoe capsizes. But it is more commonly associated with effects that are more durable, such as when the outbreak of a disease reaches a point at which an epidemic inevitably follows. The term was first used by political scientist Morton Grodzins (1917–64) in 1957 to describe the situation in urban areas of the United States where the black population would grow to a point that caused an exodus of whites.
While Grodzins used the terms “tipping” and “tip point,” the concept had already been employed by urban planners to describe this same sociological process. The metaphorical tipping point derives from its literal meaning in physics, where a substance is tipped over, or into, a new state. That sense of disequilibrium is also captured in the metaphor’s application to economics and to processes that are subject to cumulative feedback mechanisms, such as climate change.
“ … a place where radical change is more than a possibility. It is … a certainty.”
Malcolm Gladwell, The Tipping Point (2000)
More recently, author Malcolm Gladwell (b. 1963) recounted how the idea has been extremely fertile, especially in understanding sociological phenomena. He first noticed the term in its application to epidemiology, in which a large number of people contagiously adopt a type of behavior. In this sense, the idea of the tipping point has already reached a tipping point, and has proven itself a powerful means of reinterpreting everything from weather patterns to consumer habits. TJ
1957
Happening
Allan Kaprow
An artistic interpretation of the everyday, which gave audiences and art new life
Allan Kaprow uses a lawnmower to shred paper before musicians and an audience on September 8, 1964.
In the mind of Allan Kaprow (1927–2006), the U.S. painter, assembler of three-dimensional collages, and pioneer of performance art, a “happening” was “a game, an adventure, a number of activities engaged in by participants for the sake of playing.” They were, he said, “events that, put simply, happen.”
Kaprow may have coined the term in 1957 while attending some improvisational performances at the farm of sculptor George Segal (1924–2000), but what he witnessed had its roots in the challenge of the Futurists and Dadaists of the 1920s to traditional notions of what constitutes art, and how it is exhibited. Kaprow took the idea and applied it to his own art, which at first involved small scripted events where he encouraged his audience to make their own connections between themes and ideas. It soon grew to encompass many avant-garde artists who all brought their own perspectives and individual agendas. They were confronting the barrier between artist and audience by emphasizing the importance and involvement of the viewers, who would invariably be asked to add their own element so that no two acts were ever the same. They provoked interaction; they were “one-offs,” temporary and ephemeral.
“The artist will discover out of ordinary things the meaning of ordinariness.”
Allan Kaprow
The movement peaked in 1963 with the Yam Festival held at Segal’s farm, but declined in popularity thereafter. Less theatrical than the performance art with which they were often confused, happenings made artists out of all of us, and did so in our world—on a farm, in a street, or in an abandoned building. Art was no longer brought to the people; it was the people. BS
1957
Generative Grammar
Noam Chomsky
Rules determining sentence construction by speakers of a particular language
Chomsky argued for an “innate” universal grammar rather than one learned from the environment.
In linguistics, generative grammar refers to the set of rules that produce sentences in a language. These rules of syntax determine how sentences are structured. For example, the English language is structured by rules specifying word order, as with the adjective “black” preceding the noun “cat.” Other rules specify changes to the structure of words themselves, such as the “-ed” commonly added to a verb to indicate the past tense. Noam Chomsky (b. 1928) first proposed a theory of generative grammar in 1957 and developed other versions that have come to dominate the field.
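The core idea—that a small, finite set of rewrite rules can generate an unbounded number of grammatical sentences—is easy to illustrate with a toy program. The sketch below uses a simple, made-up context-free grammar offered purely as an illustration of rule-driven sentence generation; it is not Chomsky’s transformational formalism, and the rules and vocabulary are invented for this example.

```python
# A toy generative grammar: a handful of rewrite rules that produce short
# English sentences. Illustrative only; not Chomsky's own formalism.
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "Adj", "N"], ["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["black"], ["small"]],
    "N":   [["cat"], ["dog"]],
    "V":   [["chased"], ["saw"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying one of its rewrite rules."""
    if symbol not in GRAMMAR:          # terminal word: return it as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the black cat chased a dog"
```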
The original version—transformational grammar—was the idea that sentences are generated by formal rules that transform their structure. Before Chomsky’s theories, the study of generative grammar had been largely concerned with providing a description of the generative rules specific to a particular language. The first systematic description of generative grammar was undertaken in the sixth century BCE by the Sanskrit grammarian Pānini. Central to Chomsky’s theories was the much more ambitious and controversial idea that these generative grammars in different languages were the expression of what he called a universal grammar. This was the hypothesis that there must be inherited constraints on the range of possible grammars.
Many thinkers from centuries before Chomsky had noted the commonalities in the grammars of different languages and reasoned that they must be innate. However, Chomsky’s theory can be seen as a response to theories such as behaviorism that had gained prominence in the 1950s and attempted to explain language learning as the product of repeated exposure to words and sounds in different contexts. While controversial, Chomsky’s notion of a universal grammar quickly became dominant in understanding generative grammar, while also providing a source of insight into the human mind itself. TJ
1957
Alien Abduction
Antonio Boas
An unexplained phenomenon that first arose in the science-fiction-obsessed 1950s
Gene Barry, Ann Robinson, and townspeople approach a spacecraft in the film The War of the Worlds (1953).
UFO sightings have been recorded since antiquity, but the first notable account of a human being abducted by aliens did not appear until 1957. Antonio Boas, a Brazilian farmer, claimed to have been taken aboard a spacecraft and coerced into having sex with a female humanoid creature, before being released.
The best-known tale of alien abduction, however, was revealed under hypnosis by Betty and Barney Hill, a married couple returning home after a holiday in Canada late in the evening on September 19, 1961. While driving along a heavily wooded road in central New Hampshire, they noticed a light in the sky that seemed to follow them for miles and then landed on the road in front of them. They claimed to have been taken aboard an alien spacecraft, shown a star map detailing star patterns not observable from Earth, and physically examined before being released. There was not a shred of corroborating evidence to support their story, and the star map, which Betty later drew from memory, showed only random stars whose pattern and positions could easily be replicated by looking elsewhere in the universe.
Stories of contact with otherworldly beings (such as angels) have appeared throughout history, and many cultures have believed in people being transported to other dimensions, so tales of alien abduction can perhaps be seen as a modern manifestation of this tradition. Scientifically, there is debate about whether abduction experiences relate to real physical events, psychological interaction, altered states of consciousness, or simply delusional fantasy. According to U.S. psychologist Stuart Appelle in his essay “The Abduction Experience: A Critical Evaluation of Theory and Evidence” (1996), alien abductions are “subjectively real memories of being taken secretly against one’s will by apparently nonhuman entities.” The accounts of abductees are, for them, undoubtedly very real. But memories and recollections fall well short of proof. BS
1957
Rational Ignorance
Anthony Downs
Ignorance is rational when the effort to acquire knowledge exceeds its benefit
Rational ignorance is often cited as a cause of voter apathy.
According to the theory of rational ignorance, people can only spend so much of their day learning new information and so will ignore learning about that which will provide little reward. The cost-benefit analysis that individuals make when deciding what to learn about explains why so many people choose to remain ignorant about seemingly important topics.
Anthony Downs (b. 1930) coined the term “rational ignorance” in his book An Economic Theory of Democracy (1957). The phrase was used to explain why many voters knew very little about issues that seemed extremely important, such as nuclear proliferation. The threat of potential nuclear war affects everyone, but this does not mean that people will be interested in learning more about it. Even in the face of public education programs or widely available information, people generally choose not to learn about important issues because there is little chance that their efforts will lead to a practical, personal benefit.
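The reasoning behind rational ignorance reduces to an expected-value comparison: become informed only if the expected personal benefit outweighs the cost of learning. The sketch below uses entirely hypothetical numbers, chosen only to illustrate that comparison; it is not a model from Downs’s book.

```python
# A minimal sketch of the rational-ignorance calculation: learn about a topic
# only if the expected personal benefit exceeds the cost of becoming informed.
# All figures below are hypothetical, for illustration only.

def worth_learning(hours_required, value_of_hour, payoff_if_informed,
                   probability_it_matters):
    """Return True if the expected benefit of becoming informed exceeds its cost."""
    cost = hours_required * value_of_hour
    expected_benefit = payoff_if_informed * probability_it_matters
    return expected_benefit > cost

# Studying a national policy issue: huge stakes, but one person's influence
# on the outcome is vanishingly small.
print(worth_learning(40, 20, 1_000_000, 0.000001))  # False

# Fixing the leaking roof over your bed: modest stakes, but certain to matter.
print(worth_learning(3, 20, 500, 1.0))              # True
```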
“ … it is individually irrational to be well-informed …”
A. Downs, An Economic Theory of Democracy (1957)
The economy is on shaky ground, political leaders are corrupt, and your roof has sprung a leak over your bed. Which problem will you devote hours of precious free time to investigating and trying to solve? The theory of rational ignorance knows the answer, and so do you. No matter how important something may seem, even the smallest problem is magnified when it directly affects you. It is why you spend time worrying about bills and your children instead of voting, or learning about the nearby supervolcano that, though it has only a small chance of erupting at some point within the next 100,000 years, would kill everyone you know if it did. MT
1957
Many Worlds Theory
Hugh Everett III
Our universe is only one of an infinite number of different universes
The many worlds theory, or many worlds interpretation, states that every quantum event that could result in two or more possible outcomes results in all outcomes taking place, each in a separate universe. All possible outcomes of every quantum event have effectively taken place, and all possible alternatives throughout history have actually occurred in some other universe.
At the quantum level, it is impossible to observe a particle without affecting its behavior, nor can we ever really be certain what a particle is doing at any given point. Quantum physicists can only predict the behavior of a particle by saying how likely an outcome is. This quantum uncertainty led physicist Niels Bohr (1885–1962) to propose that, until it is observed, a particle exists in all of its possible states at once—an idea central to the Copenhagen interpretation. However, physicist Hugh Everett III (1930–82) thought otherwise. For Everett, the observed event is the only one that happened in this universe; all the other possible outcomes also happened, but each in its own separate universe.
“[If an] amoeba had the same memories as its parent, it has not a life line, but a life tree.”
Hugh Everett III
The idea of parallel dimensions or universes has long been a popular topic for fiction writers, but the many worlds theory was not intended as fiction. As an attempt to explain why the quantum world functions as it does, the many worlds theory is one of several competing interpretations, albeit the one with perhaps the most striking implications. The idea that every possible outcome of the choices you have made has, in reality, happened or taken place in a parallel universe is awe-inspiring and disturbing at the same time. Or perhaps awe-inspiring in this universe and disturbing in another. MT
1958
The Idea of Freedom
Mortimer J. Adler
A historical investigation of the major philosophical views on the meaning of freedom
Mortimer J. Adler (pictured in 1977) used a dialectical approach to examine the many philosophical avenues that the concept of freedom has taken over the years.
In 1952, Mortimer J. Adler (1902–2001) and his colleagues from the Institute of Philosophical Research began surveying the vast literature on the philosophical conceptions of freedom and attempted to distill the findings into some basic categories. In 1958, the results were published in The Idea of Freedom: A Dialectical Examination of the Conceptions of Freedom. This exhaustive research showed that the concept of freedom has essentially been understood as taking three distinct forms: circumstantial, acquired, and natural. However, when these more abstract conceptions are understood in terms of personal abilities, they reflect those of self-realization, self-perfection, and self-determination. The circumstantial freedom of self-realization consists in the person’s ability to do as they wish, as far as circumstances allow; the acquired freedom of self-perfection involves learning how to act in accordance with some personal or moral ideal; the natural freedom of self-determination simply refers to the person’s ability to control their own behavior. As many thinkers have also conceived freedom to have political and collective dimensions, Adler also regarded these as distinct conceptions—political freedom being an extension of the circumstantial freedom of self-realization; collective freedom as the social expression of the acquired freedom of self-perfection.
“A man is free who has in himself the ability or power whereby he can make what he does his own action and what he achieves his own property …”
Mortimer J. Adler, The Idea of Freedom (1958)
Philosophical attempts to describe the meaning of freedom have a long and varied history. Despite the many disagreements, Adler regarded philosophical inquiry itself as a historical dialog in which philosophers respond to the challenges raised in the work of other thinkers, past and present. As a result of this dialectical process, Adler believed that many of those philosophical notions would in fact come to agreement in certain respects. The research demonstrates how a dialectical survey of an abstract concept can reveal how that concept has come to be understood. TJ
1958
Two Concepts of Liberty
Isaiah Berlin
A vital distinction in understanding the political and personal value of freedom
The dual concepts of liberty proposed by Isaiah Berlin (pictured in 1992) differed according to whether the individual was allowed autonomy and the freedom of self-development.
Two Concepts of Liberty was originally composed as a lecture given at Oxford University in 1958 by the philosopher and political theorist Isaiah Berlin (1909–97). In it, Berlin distinguished two senses of liberty. Negative liberty is simply the absence of constraints on personal action: it generally rules out any interference with or coercion of citizens by political authorities, except for laws minimally required to defend that personal liberty. Positive liberty, in contrast, is expressed in the idea that people are not genuinely free unless they have the power to pursue their goals. Both concepts agree that liberty requires freedom from unnecessary political and social constraints, but the positive version regards personal autonomy or self-determination as essential to the realization of that freedom.
“The fundamental sense of freedom is freedom from chains, from imprisonment, from enslavement by others. The rest is extension of this sense, or else metaphor.”
Isaiah Berlin, Four Essays on Liberty (1969)
Like many philosophical ideas, these two distinct interpretations of liberty had already been recognized by ancient Greek philosophers. However, Berlin’s interest in the history of ideas enabled him to observe clearly how both concepts had developed as political ideals that would often come into conflict. Although the positive notion of liberty sought to enhance the opportunities for personal autonomy, if the concept was zealously adopted as a collective ideal it could easily have the opposite effect. The positive concept was most strongly promoted by Enlightenment philosopher Jean-Jacques Rousseau and was central to the ideology that inspired the French Revolution (1789–99).
As happened in the aftermath of that revolution and many since, the positive concept has often been used by totalitarian regimes to justify severe restrictions on personal liberty in order to realize goals identified with the common good of all citizens. Although liberal democracies are more firmly guided by an ideal of negative liberty, the positive concept is often represented in paternalistic laws governing health and safety. TJ
1958
The Great Leap Forward
Mao Zedong
An attempt by China to rapidly modernize its industrial and agricultural production
The Great Leap Forward was a plan by China’s communist government to accelerate the country’s industrialization as part of its goal to become a self-sustaining economy. The scheme was devised by the founding father and leader of the communist government, Mao Zedong (1893–1976), and began in 1958. Industrialization was usually a process of gradual development made possible by capital investment in heavy machinery and new technologies. The Great Leap Forward was the ill-conceived idea that this gradual process could be skipped by developing numerous small-scale industries and increasing agricultural production at the same time.
“ … corpses often ended up in shallow graves or simply by the roadside.”
Frank Dikötter, Mao’s Great Famine (2010)
The scheme began as a five-year plan and was conceived partly as a result of the apparent success of the government’s initial five-year plan to build large-scale plants to increase the production of commodities such as coal, iron, and steel that would reduce the country’s dependence on agriculture. That plan had succeeded with the aid of funds and technical expertise provided by the Soviet Union. Many farms had already been voluntarily organized into collectives as part of the first five-year plan, and under the Great Leap Forward collectives became large communes that now also included small-scale industries. Within a year, the Great Leap Forward had proven disastrous. With much of the labor force now diverted to work on these small industrial projects, agriculture was unable to supply enough food, and famine was the inevitable result. The idea was finally abandoned in 1961 only after the famine caused 20 to 40 million deaths. TJ
1958
Meritocracy
Michael Young
A system that rewards individuals on the basis of their achievements or abilities
Exemplary meritocracy: Pakistani students participate in the national test for university admission in 2012.
Any society that rewards individuals on the basis of certain desired talents may be described as a meritocracy. As a political principle, it applies to any system of government in which candidates are chosen for such qualities as general intelligence or diligence. Insofar as business organizations generally select and reward staff for their skills and achievements, they too are meritocracies. The term itself was coined in an ironic sense by British sociologist Michael Young (1915–2002), in his satire The Rise of the Meritocracy, 1870–2033 (1958).
Meritocracy has become an established principle governing most modern democratic societies, and its popularity stems from the fact that it permits equality of opportunity. It rewards people for their own efforts and does not discriminate against anyone because of their social status. The concept was foreign to European societies until, around the Age of Enlightenment, the idea reached the British through contact with Confucianism. Confucius’s philosophy placed a high value on a person’s moral character rather than on inherited advantages, and so offered a laudable alternative to the old aristocratic European regimes. However, in The Rise of the Meritocracy, Young described how this original ideal of merit had become corrupted, reduced to “intelligence plus effort,” thereby creating a new social class that discriminated against anyone deemed to lack these capacities.
“It is hard in a society that makes so much of merit to be judged as having none.”
Michael Young, The Guardian (June 29, 2001)
With its emphasis on equal opportunity, meritocracy has given modern democracies a strong moral basis. However, forms of social discrimination can be seen in practices such as psychometric tests used to hire staff. TJ
c. 1960
New Age
Various communities
A spiritual movement that draws on both Eastern and Western spiritual and metaphysical traditions in search of harmony and enlightenment
Nude couples on a balcony at the Esalen Institute in northern California in 1970. This residential community and retreat center still promotes a New Age philosophy today.
The phrase “New Age” was first used by William Blake in 1809 to describe what he saw as an approaching era of spiritual and artistic awareness. Elements of what we would recognize today as New Age continued to accumulate throughout the remainder of the nineteenth century, such as theosophy and various forms of spiritualism. In the 1960s the spiritual pioneers who established the Findhorn Community in Scotland and the Esalen Institute in California would, over many years, evolve into the very first contemporary New Age communities, helping people to find the “Christ within” or each person’s “human potentialities” and culminating in the establishment of ongoing foundations to promote self-awareness.
“Life is not measured by the number of breaths we take, but by the moments that take our breath away.”
George Carlin, comedian
To some, the New Age refers to the upcoming astrological age of Aquarius. Some New Age followers may believe in tarot cards, psychic readings, and various other occult practices; others might consult astrology for insights into their future, or crystals for healing; while still others seek the more benign pursuits of yoga and meditation. There is no obvious hierarchical structure within the New Age movement and no immutable doctrines, although in the 1970s the work of people such as David Spangler (b. 1945), the self-professed spiritual philosopher and clairvoyant, brought a sense of cohesion and provided hope for those who could not find answers to their questions in Christianity or Secular Humanism.
Fundamentally aligned to Hinduism, “New Age” covers a variety of groups who all share a common quest for harmony, enlightenment, a deeper sense of spiritual awareness, and personal healing and growth. It is not a cult, nor is it a religion. It has no recognized beginning or any particular founder. Rather it is a loose amalgam of worldviews, a philosophical approach to the world that is difficult to define. BS
c. 1960
Political Correctness
United States and United Kingdom
Using language that avoids potentially offensive or discriminatory terms
A person who is politically correct avoids using terms that carry potentially negative or offensive implications. The term, often abbreviated to “PC,” can also refer to policies that contain or rely upon such language, and to the avoidance of behaviors or beliefs that are offensive, potentially offensive, or presumptively exclusionary. For example, political correctness would replace the term “chairman” with “chairperson,” and use “person with a disability” instead of “handicapped.”
The term “political correctness” originated around the time of the Russian Revolution in 1917, when those who professed beliefs identical to those of the Communist Party were said to be politically correct in their thinking. The modern concept of political correctness, however, arose during the 1960s and 1970s. At that time, certain political activists who advocated for broad social reforms in the United States and the United Kingdom—a group broadly defined as the “new left”—began using the term to describe language that avoided having offensive connotations against groups that had been traditionally discriminated against. In the 1980s and 1990s many people on the political right began voicing widespread opposition to the PC movement. Today, the term is often used derisively to refer to any attempt to impose limitations on speech, belief, or thought.
Proponents of political correctness argue that changing language to be inoffensive to those who have traditionally been discriminated against is laudable, while opponents see the attempt to control language as a thinly veiled political strategy by those seeking to assert their own dominance over others. Yet despite the debate over its use, political correctness has been responsible for a significant change in language, behavior, and beliefs. Not only are some nonpolitically correct terms no longer widely used, but those who choose to use them can often be subject to condemnation. MT
1961
Madness and Civilization
Michel Foucault
A seminal French work on how “madness” is a social construct
Michel Foucault (1926–84), a philosopher who also trained in psychology, argued in his first major work, Madness and Civilization (1961), that the Middle Ages accepted madness and visible deformity as a part of life. Madness was respected, even if it was also feared. This changed dramatically with the “Great Confinement,” a seventeenth-century movement in which all sorts of people deemed “undesirable” were classed as being without reason and made subject to detention by the state, physically chained to walls and floors. By the eighteenth century, lunatics were no longer chained but were free to move about the asylum, now under the care of psychiatrists.
In the “Great Confinement” was the origin of modern notions of mental illness as the absence of reason and the beginnings of the psychiatric profession’s scientific inquiry into its causes. The insane were no longer a part of everyday life; they were shut away, rendered invisible from society. Worse still, they were unwilling objects of probing scientific investigation. As civilization grew more complex, bureaucratic, and scientific, more types of behavior were considered unreasonable. Though psychiatrists professed to aid the insane, they were in fact imposing an insidious kind of social control.
“Tamed, madness preserves all the appearances of its reign.”
Michel Foucault, Madness and Civilization (1961)
For Foucault, there was no reality to mental illness; it was a social construct, a way for society to punish outlandish or odd behavior. His work was a boon to the 1960s counterculture, particularly to gay rights activists protesting the diagnosis of homosexuality as a mental disorder, and also to critics of psychiatry. To this day, psychologists assail Foucault’s skepticism of the reality of mental illness, particularly schizophrenia. CRD
1961
Liberating Violence
Frantz Fanon
Colonized people must use violent means to achieve liberation from a colonial power
An Algerian celebrates his country’s independence from French rule in 1962.
In The Wretched of the Earth (1961), Frantz Fanon (1925–61) argues for the necessity of violence as the only means that can successfully liberate natives from colonial oppression. As this oppression is both physical and psychological, only complete liberation can restore independence. Without the use of violence, natives will remain dehumanized. Fanon essentially argues that the violence used by the colonizers only forces the natives themselves to resort to violent means of liberation.
As a native of the French colony Martinique, Fanon witnessed the abusive treatment of the native population by French soldiers stationed there during World War II (1939–45). He also served voluntarily with the French army, and after qualifying as a psychiatrist in France he wrote Black Skin, White Masks (1952)—a book that attempts to explain the sense of alienation felt by colonized natives. This led Fanon to confront the question of how a repressed and dispossessed native population could recover ownership and control over their nation. This diagnosis of alienation and the call for revolution were partly influenced by Marxist theory, but Fanon rejected the Marxist idea that an educated awareness of social class was needed to inspire native groups to revolt. He argued instead that those who are most motivated to lead such a violent revolt are the peasants, who are least dependent on the colonizers and therefore have no incentive to make compromises. Fanon’s call for violent liberation has influenced many anti-colonial movements, and his understanding of racial and social oppression has also inspired its victims with the moral courage needed to assert their humanity. TJ
“The starving peasant is the first of the exploited to discover that violence pays …”
Frantz Fanon, The Wretched of the Earth (1961)
1961
Rational Choice Theory
George Homans
A sociologist adapts economic theory to study human social interaction
Just as economic theories can play a significant role in studying the complexities of how the production and consumption of goods and services drive a nation’s economy, so too can a set of principles—according to rational choice theory, or RCT—be used to comprehend human-to-human interactions and to predict people’s intentional, goal-oriented behavior. The man who devised the theory, the U.S. sociologist George Homans (1910–89), based it upon the principles of behavioral psychology and firmly believed that psychology could provide explanations for sociological phenomena. He was particularly interested in reciprocal behavior and argued that no new theories were required to study this social interaction; all that was needed was to modify existing behavioral principles, without forgetting that sociology always begins its analysis with the behavior not of groups, institutions, or structures, but of individuals.
“Human behavior … is not free but shaped by rewards and punishments.”
John Scott, sociologist
RCT attempts to understand why and how we act to maximize rewards, how we weigh costs against benefits before deciding whether an action is worthwhile, why we give up individual control to a collective or group, and how we employ strategies designed to maintain our control over resources.
Homans followed the behaviorist model of social interaction, which says that we enter into arrangements with others only after weighing the possible rewards against the estimated costs of our actions. His theory is based on rationality, arguing that all our social actions are the result of rational choices, however irrational they may appear to be. BS
1961
The Drake Equation
Frank Drake
Estimating the number of detectable extraterrestrial cultures in the Milky Way
In 1961, astronomer and astrophysicist Frank Drake (b. 1930) devised an equation that was intended to guide the search for extraterrestrial intelligence by estimating the likely number of detectable civilizations within our own Milky Way galaxy.
The equation is: N = R* × fp × ne × fl × fi × fc × L, where the number of detectable civilizations, N, is the product of seven specific factors: R* is the yearly rate of star formation in our galaxy; fp is the fraction of those stars that have planetary systems; ne is the number of habitable planets in each planetary system; fl is the fraction of habitable planets that actually develop life; fi is the fraction of those life-bearing planets on which intelligence develops; fc is the fraction of civilizations that develop detectable interstellar communication technology; and L is the length of time for which civilizations use such communication technology.
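Because N is simply the product of its seven factors, the calculation is easy to sketch in code. The values in the sketch below are arbitrary assumptions chosen only to show how the factors combine and how sensitive the result is to each of them; they are not Drake’s own estimates.

```python
# A minimal sketch of the Drake equation, N = R* x fp x ne x fl x fi x fc x L.
# The illustrative inputs below are assumptions, not Drake's estimates.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.0,     # stars formed per year in the Milky Way
    f_p=0.5,        # fraction of stars with planetary systems
    n_e=2,          # habitable planets per planetary system
    f_l=0.1,        # fraction of habitable planets that develop life
    f_i=0.01,       # fraction of life-bearing planets that develop intelligence
    f_c=0.1,        # fraction of civilizations with detectable technology
    lifetime=1000,  # years such technology remains in use
)
print(f"Estimated detectable civilizations: {n:.2f}")  # 0.10 with these inputs
```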
“[P]robably only one in ten million stars has a detectable signal.”
Frank Drake, Cosmos Magazine (2010)
A decade before Drake’s equation, physicist Enrico Fermi wondered why intelligent life had not been detected. Given the age of the universe, even if only a small fraction of planets were capable of communication, we should have evidence of it by now.
Though the Drake equation was not formulated in response to Fermi, the idea that radio signals would be the most promising means of detecting intelligent life enabled Drake to see that communicative ability was a crucial factor in setting limits to the number of planets with potential civilizations. The two factors involving the development and continuing use of interstellar communications gave the equation a potentially fruitful way of answering Fermi’s paradox. TJ
1961
Milgram Experiments
Stanley Milgram
Experiments in social and moral psychology testing obedience to authority
The Milgram experiments were designed to investigate the extent to which individuals are willing to obey authority when instructed to perform apparently harmful acts against others. In the original and best-known experiment at Yale University in 1961, psychologist Stanley Milgram (1933–84) recruited forty men to participate in a study claiming to investigate memory and learning. Each man was assigned the role of “teacher” and asked to read a set of word pairs to a “learner” (an actor) in another room. The teacher had been falsely led to believe that the learner was connected to a machine capable of delivering electric shocks. A researcher instructed the teacher to administer a shock whenever the learner gave a wrong answer. The teacher was told to increase the strength of each shock until the learner gave the correct answer. While some refused to continue after the learner began to shout in pain, 65 percent administered the strongest shock of 450 volts.
“ … many subjects did, indeed, choose to reject the experimenter’s commands …”
Stanley Milgram
Milgram’s experiments on obedience to authority emerged from his earlier studies on social conformity, but he was particularly troubled by the evidence at the Nuremberg trials that showed how willingly Nazi officers had committed atrocities and attempted to avoid any moral responsibility on the grounds that they were dutifully following the orders of their superiors. The original experiment raised many ethical objections. But variations of the experiment and attempts to replicate it decades later have consistently validated those initial findings, which show the moral conscience to be remarkably vulnerable to manipulation by authorities. TJ
1961
The Genesis Flood
John C. Whitcomb and Henry M. Morris
Revival of the creationist view that geological history is the result of a flood
Published in 1961, The Genesis Flood: The Biblical Record and its Scientific Implications proposed that geological history is best interpreted as resulting from the global flood described in Genesis 6–9. Young Earth creationists John C. Whitcomb (b. 1924) and Henry M. Morris (1918–2006) believed that the Book of Genesis provides a literal, factual, truthful account of Earth’s creation and formation. By this interpretation Earth must be only about 6,000 to 10,000 years old, with the flood occurring less than 2,000 years after the creation. To explain how this single deluge could be responsible for the layered fossil record, the authors contend that animals with the same bodily structures, behaviors, and local environments would be swept into the same locations by the action of the floodwaters. Also, the more mobile or buoyant creatures would naturally have been the last to succumb to the rising waters, leaving them in the higher layers.
“The evidence for divine inspiration is far weightier than the evidence for science.”
John C. Whitcomb and Henry M. Morris
Early Christian thinkers had first spread the idea that the Genesis flood accounted for the range of fossils. However, once it became the subject of scientific inquiry in the eighteenth century, it became clear that Earth was formed not thousands but billions of years ago, and the depth and diversity of fossil layers showed they were built up long before humans appeared.
Modern geological science now provides a full account of the forces and events that produced the different fossil layers. Nevertheless, as creationism has since attempted to gain more respectability in the guise of Intelligent Design, the ideas in The Genesis Flood made it a best seller that retains its importance for fundamentalist Christians. TJ
1961
Military-industrial Complex
Dwight D. Eisenhower
Relationship between producers of military goods and the officials who pay for them
As technology has become increasingly complicated, and modern weapons have become reliant upon large industries to produce them, the costs of military equipment have risen and given rise to an entire industry dedicated to providing governments with the materials. U.S. president Dwight D. Eisenhower (1890–1969) referred to the relationship between the military equipment producers and the government officials responsible for purchasing them as the military-industrial complex. The phrase describes the interconnected, and potentially corrupting, relationship between manufacturers and those in government who have the power to determine what the military needs and how much the government will spend to obtain it.
Humanity has engaged in warfare since before recorded history, but prior to the industrial age it was relatively rare for a nation to maintain a large standing army during peacetime. Serving as the supreme allied commander in Europe during World War II and as the president of the United States from 1953 to 1961, Eisenhower had extensive experience and knowledge about the relationship between the government and equipment manufacturers. He first used the phrase “military-industrial complex” in his farewell address to the nation in 1961. He warned that the size and scope of the nation’s defense budget, and those interested in maintaining it, posed a distinct threat to the nation.
The idea that military material producers and government officials have a symbiotic relationship that benefits their own needs, and not necessarily those of the nation, has ingrained itself in popular U.S. discourse, and also spread to other nations. That idea may have also helped shape opinions about how large corporations—and not merely those involved in defense spending—depend on, and seek to influence, government spending and policy decisions. MT
1962
Explanation, Reduction, and Empiricism
Paul Feyerabend
The idea of “incommensurability,” or the thought that knowledge moves by great leaps and ruptures rather than simply replacing previously held notions
Paul Feyerabend, pictured here in 1984, codified his anarchic thesis that scientific facts cannot be differentiated from myths in his seminal text, Against Method (1975).
In the paper “Explanation, Reduction, and Empiricism” (1962), Paul Feyerabend (1924–94) challenged the notion that science is the objective, rational accumulation of facts and the gradual replacement of false or inconsistent theories with better, more robust ones. Instead, theory change in science was revolutionary in the sense that the terms used to describe the phenomena and the causal laws that govern them were fundamentally altered. Feyerabend concluded, “A theory is incommensurable with another if its ontological consequences are incompatible with the ontological consequences of the latter.” This means that each successive theory uses assumptions and a vocabulary that represent a novel way of explaining and describing the fundamental characteristics of the universe, to such an extent that successive ways of describing the world are mutually exclusive.
“The interpretation of an observation is determined by the theories that we use to explain what we observe, and it changes as soon as those theories change.”
Paul Feyerabend
The problem presented in the paper came about through Feyerabend’s criticism of existing accounts of theory change in science that could not account for the rapid changes in theoretical understanding that appeared to have taken place in the history of science, particularly in the physical sciences. Scientists and philosophers had held the belief that science was linear and conservative in its growth. Feyerabend countered that older theories were not merely modified but were rejected and replaced. Once a theory changed, the worldview of scientists changed, as did the ways in which they accumulated evidence for a theory.
Feyerabend’s account of theories altered the way in which we view scientific method and the increase of knowledge in science. It allows us to compare better the relative merits of a theory and also to understand the assumptions of scientists at specific moments in historical time. CRD
1962
Paradigm Shift
Thomas Kuhn
A change in how scientists approach and ask questions about a field of study
A multiple exposure portrait of historian Thomas Kuhn (1973), an exponent of scientific paradigms.
According to U.S. scientist, historian, and philosopher Thomas Kuhn (1922–96) in his book The Structure of Scientific Revolutions (1962), science does not progress in a linear fashion, building new knowledge from previously discovered truths. Instead, it proceeds in an ongoing process of periodic revolutions, in which a new way of thinking, known as a paradigm, overthrows the old way because the old way is perceived to be unable to explain problems adequately.
Kuhn focused his attention on understanding how scientific thought and knowledge progressed. In his studies of the history of science, he identified patterns in which the prevailing model, the paradigm, in a particular field would, periodically, be overturned by a new paradigm. This process is similar to the way in which political systems sometimes go through revolutions, where the old power is replaced by a new regime. This change from one way of thinking to a new way of thinking Kuhn dubbed a paradigm shift. When Copernicus (1473–1543) overturned the geocentric model with the new heliocentric model, not only did astronomers no longer think in terms of celestial spheres, but also the new questions they asked had nothing to do with the old model. This is one of the distinguishing features of a paradigm shift.
These periodic changes in the accepted paradigm portrayed a scientific process that was not, as many people had long believed, a linear progression from ignorance to knowledge. Instead, Kuhn proposed that most scientific progress was made by those who assume the validity of the current paradigm, and any inconsistencies or anomalies that arise to challenge the prevailing paradigm are viewed as erroneous or inconsequential by adherents. This view, widely held but still debated today, was radically different from the prevailing view of how science progresses. The idea of a paradigm shift was, in effect, a paradigm shift. MT
1962
Silent Spring
Rachel Carson
The alarming idea that humankind was poisoning all life with pesticides
A farmer sprays fruit trees with pesticide, a practice that was strongly criticized by Rachel Carson.
The term “silent spring” refers to the argument that uncontrolled and unexamined pesticides harm and kill animals, birds, and humans. It was argued that bird populations in the United States had declined in the 1950s, and this so-called “silencing of birds” was due to the overuse of pesticides. Humankind also was slowly being poisoned by the misuse of chemical pesticides that polluted the environment. Scientists, it was said, cannot accurately predict the long-term impact of the accumulation of chemicals on human health.
U.S. conservationist and marine biologist Rachel Carson (1907–64) began writing what came to be her book Silent Spring in 1958. It was serialized in The New Yorker in June 1962, and published as a book in September the same year. In it, she described how “chlorinated hydrocarbons and organic phosphorous insecticides altered the cellular processes of plants, animals, and, by implication, humans.”
After Silent Spring was published, there was a public outcry, and the book was instrumental in launching the U.S. environmental movement. Investigations were launched into the validity of Carson’s claims, resulting in changes to legislation regarding air, land, and water in the United States, including the banning of domestic production of DDT (a chlorinated organic insecticide) in 1972. Silent Spring remains a controversial work: its critics maintain that it stigmatizes DDT and fails to take into account DDT’s advantages in controlling the transmission of malaria by killing the mosquitoes that carry the parasite. Some people claim that the decline in the use of DDT globally has led to the deaths of many people who might not otherwise have contracted malaria. A book of critical essays outlining errors in Carson’s research, Silent Spring at 50: The False Crises of Rachel Carson, was published in 2012; it points out, for example, that bird populations were increasing in the United States at the time the work was published. CK
1962
Automobile Child Safety Seat
Leonard Rivkin and Jean Ames
An innovation in recognition of the dangers to children of automobile travel
A child in the 1960s, secure in a commercially available safety seat suitable for bench-type seating.
The invention of the child safety seat, which occurred simultaneously in the United States and the United Kingdom in 1962, denoted a shift from thinking of the automobile as simply a mode of transportation to recognizing it as a complex machine with inherent dangers for its passengers. Out of this belated recognition came the later installation of airbags, anti-lock brakes, and other auto safety measures.
In Denver, Colorado, Leonard Rivkin (b. 1926) designed a seat that restrained the child at the waist, while an Englishman, Jean Ames, produced a device consisting of a Y-shaped brace that came over the head and rested against the chest. Both designs had padded seats that elevated the child, and both were intended specifically to protect the child in the event of a crash.
“Car crashes are the No.1 killer of children 1 to 12 years old in the United States.”
National Highway Traffic Safety Administration (2013)
Before the invention of the child safety seat, infants and small children had been placed in devices that sought to restrict their movement and elevate them rather than protect them. Infants were sometimes placed in canvas bags that were draped over seats. This had disastrous consequences in the event of an accident because car seats at the time were designed to flip forward, and the infant could be catapulted into the windshield. Rivkin invented his safety seat in response to the widespread installation of “bucket” seats designed for a single occupant, in contrast to the bench seating of earlier automobiles.
Child safety seats are now a ubiquitous feature of modern motoring. In many countries, including the United States, it is obligatory for parents to promote child safety in cars by using them. CRD
1962
Personality Indicator
Katharine Briggs and Isabel Briggs Myers
A new template is created for evaluating why we act the way we do
During World War II, two U.S. women, Katharine Cook Briggs (1875–1968) and her daughter, author Isabel Briggs Myers (1897–1980), began to develop a simple questionnaire. Tens of thousands of women were entering their country’s industrialized, wartime economy, and the questionnaire was intended to help them identify jobs that would best suit their individual personalities. The initial assessment of psychological preferences was based upon the four cognitive functions outlined by Swiss psychotherapist Carl Jung (1875–1961) in his book Psychological Types (1921), namely thinking, feeling, sensing, and intuition. Briggs and Myers’s questionnaire evolved over the years, and was eventually published as the Myers-Briggs Type Indicator (MBTI) in 1962.
“[It makes] the theory of psychological types … understandable and useful …”
Mary McCaulley, Myers-Briggs pioneer
Designed to unravel how we become aware of events and ideas and reach conclusions, the MBTI studied how our preferences grow out of the way we judge and perceive the things around us. It is made up of four key indices: extraversion–introversion—separating extraverts who focus outwardly on people and objects, from introverts whose perception is directed inwardly toward ideas and concepts; sensing–intuition—those who rely primarily on sensing observable facts, and those who intuit meanings and events outside of the mind; thinking–feeling—those who tend to think of consequences, and those who rely on feelings; and finally, judgment–perception—distinguishing between those who prefer to judge the world around them through thinking and feeling, and those who use perception, sensing, and intuiting. BS
1963
The Banality of Evil
Hannah Arendt
Acts of true evil are motivated not by hatred but by thoughtlessness
Nazi leader Adolf Eichmann stands in a prisoner’s cage during his trial for war crimes in 1961.
German-American political theorist Hannah Arendt (1906–75) first used the term “the banality of evil” in her work Eichmann in Jerusalem: A Report on the Banality of Evil (1963) to explain how a seemingly mild-mannered man, Nazi SS officer Otto Adolf Eichmann (1906–62), on trial for war crimes in Jerusalem in 1961, could have perpetrated such a monstrous crime against humanity, namely the organization of mass deportations of Jewish populations to concentration and extermination camps in Nazi-occupied Eastern Europe. For Arendt, Eichmann was neither a madman nor a monster. Rather, “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.”
Arendt formulated the term “the banality of evil” as part of a wider philosophic project to explain how a society as modern and, in her opinion, as cultured and refined as Germany could instigate and carry out such a thorough plan for mass murder as Adolf Hitler’s Final Solution. Arendt decided that evil was not irrational, or born of hatred, but instead was the product of ordinary men wanting to be obedient to orders and who, above all, prized bureaucratic efficiency. What was most disturbing for Arendt was not that a man, group, or nation could think that it was necessary to exterminate all of the world’s Jews for civilization to survive, but rather that acts required to achieve this could be undertaken without a second thought.
Although Arendt only used the phrase once, at the very end of her book, it has provided a generation of readers and writers with an entrée into her complex thought. Eichmann in Jerusalem catapulted Arendt to the status of a leading public intellectual. Before Arendt, the Nazis were portrayed as bloodthirsty killers, and the world has her to thank for providing a perhaps more troubling interpretation of events: that blind obedience leads to genocide. CRD
1963
Gettier Problems
Edmund Gettier
Thought experiments that challenge our concept of what knowledge truly is
According to Gettier, viewers may believe that these are living cows in a field, but they cannot “know” that.
In his paper “Is Justified True Belief Knowledge?” (1963), U.S. philosopher Edmund Gettier (b. 1927) posed a challenge to “propositional knowledge,” the knowledge that some proposition p is true (ice is solid, no lizards have wings). Gettier argued that a belief supported by evidence may nevertheless fall short of knowledge. Philosophers before Gettier would have mounted a three-part analysis of propositional knowledge: first, a person believes p; second, p must be true, or it is not knowledge; third, the belief must be justified, or supported by evidence and clear reasoning. If these three conditions are satisfied, then the belief counts as knowledge: it is a “justified true belief.” Gettier countered with “problems,” examples constructed to meet those conditions and yet fall short: an individual may claim to own a car and have papers to prove it, but may not in fact own the car; or an individual may see a barnlike shape next to a highway and believe that they have seen a barn, when the structure is only a billboard, not an actual barn.
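The analysis that Gettier targeted can be stated compactly. The symbols below are a convenient shorthand adopted here, not Gettier’s own notation.

```latex
% The classical "justified true belief" analysis of propositional knowledge:
% a subject S knows that p just in case p is true, S believes p, and S's belief is justified.
\[
  K_S(p) \iff
  \underbrace{p}_{\text{truth}} \;\wedge\;
  \underbrace{B_S(p)}_{\text{belief}} \;\wedge\;
  \underbrace{J_S(p)}_{\text{justification}}
\]
% A Gettier problem describes a case in which the right-hand side holds and yet,
% intuitively, S does not know that p, so the biconditional cannot be correct.
```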
Forms of the “Gettier problem,” the question of what turns justified true belief into knowledge, have been put forward from Plato’s time onward. British philosopher Bertrand Russell (1872–1970) gave the example of an individual who believed the time to be a certain hour when the clock had in fact stopped twelve hours earlier. But it was Gettier who demonstrated by simple logical analysis that a belief can be both justified and true and yet still fail to be knowledge.
Gettier’s problem, for all of the philosophic literature it has generated, still has no definitive solution. That is perhaps unsurprising, but his conclusion remains unsettling, since it argues that believing something and having evidence for it does not guarantee that we truly know it. The problem goes to the core of how we form beliefs, how we justify them, and whether those beliefs constitute true knowledge. CRD
1963
Chaos Theory
Edward Lorenz
Chaotic behaviors of complex phenomena have their own underlying order
U.S. meteorologist Edward Lorenz (1917–2008) is widely credited with having first experimentally verified what would become known as chaos theory when he published a paper titled “Deterministic Nonperiodic Flow” in 1963. In studying computerized simulations of weather conditions, Lorenz noticed that making minuscule changes in the initial data of an experiment had a significant impact on the final outcome. Even though the initial variables were measurable and knowable, complicated systems, such as weather patterns, were not predictable beyond a certain point. Tiny imprecisions in measuring the initial conditions were amplified over time, producing the complicated behavior and unpredictability of chaotic systems.
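Lorenz’s observation can be reproduced with the three equations from his 1963 paper, using his classic parameter values. The sketch below is illustrative only: the crude fixed-step integration, the step size, and the particular starting points are arbitrary choices made here, not Lorenz’s own procedure.

```python
# Sensitive dependence on initial conditions in the Lorenz system
# (the 1963 equations with Lorenz's classic parameters sigma=10, rho=28, beta=8/3).
# The fixed-step Euler integration and the starting values are illustrative choices.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one small Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def run(initial, steps=50_000, dt=0.001):
    """Integrate for steps * dt units of model time and return the final state."""
    state = initial
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0, 1.0, 1.000001))  # differs from run A by one part in a million

print("run A ends at:", a)
print("run B ends at:", b)  # after 50 time units the trajectories no longer resemble each other
```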
Chaos theory is a mathematical explanation for how and why complicated systems act the way they do. Chaotic systems appear random and ungoverned, but chaos theory explains these phenomena as anything but, showing how the unpredictable behavior is really the result of deterministic factors. Though the systems themselves are deterministic and follow the dictates of identifiable rules, their specific outcomes are unpredictable, and thus, chaotic.
“ … the pinball game is to chaos what the coin toss is to complete randomness …”
Edward Lorenz, The Essence of Chaos (1993)
With chaos theory, systems that were once unpredictable and chaotic became understandable, allowing for long-term predictions about the general character of the system, even if precise predictions were still impossible. It has been used in the study of epilepsy, economic systems, physics, politics, population growth, philosophy, electrical engineering, and numerous other fields to help understand complicated behaviors and what governs them. MT
1963
Sexism
Betty Friedan
The practice of discriminating against people based on their gender
Betty Friedan railed against the traditional expectation for women to be full-time homemakers.
French author Simone de Beauvoir (1908–86) had promoted feminism with her book The Second Sex (1949), but by the 1960s women still had a long way to go to secure all the rights enjoyed by men. A second wave of feminist thought was sparked by another, equally influential work, The Feminine Mystique (1963) by U.S. writer and feminist Betty Friedan (1921–2006). Women, said Friedan, were expected either to lead a powerless and unfulfilling life as a homemaker or to sacrifice the possibility of having children in order to protect their careers. She also noted that society’s authority figures and media argued that women were unsuitable for many jobs simply because they were women.
A “sexist” argument is any that ascribes women’s achievement, or lack of it, in a specific area to biology. Sexism, much like racism, is used by individuals or groups who possess power or influence in society to maintain their status. It is a linguistic instrument used by one person or group to impose their will upon another. Sexism can exist in any culture and be expressed by either gender.
“The sexist myth is the … most pervasive myth the world has ever told itself …”
Sheldon Vanauken, author
Acknowledging the existence of sexism and underscoring its kinship with racism were essential to the feminist movement, which argued for equality of opportunity for men and women. Feminism contributed much to a larger critique of traditional society that occurred in the 1960s and 1970s. Civilization as defined by male hierarchy and domination was questioned, and discussions of sexism changed not only our view of world history but also how we speak and write about ourselves and the world. CRD
1964
The Global Village
Marshall McLuhan
The electronic media enable all people to be interconnected through technology
In a staged photograph of 1967, Marshall McLuhan poses with a television and televised images of himself.
Canadian philosopher of communication theory Marshall McLuhan (1911–80) coined the phrase “the global village” in reference to how communications technologies—radio, television, and, in the present day, the Internet and wireless communications—allow individuals to experience events in “real time,” even if they are separated from those events by great distances. As he argued in his book Understanding Media: The Extensions of Man (1964), this was a positive development since it would engender a shared sense of community and responsibility for the actions and the fates of all the world’s inhabitants. Everyone would then experience the closeness and intimacy known to successive generations of a small, rural village, but through the present-day medium of technology.
“ … we have extended our central nervous system itself in a global embrace …”
Marshall McLuhan, Understanding Media (1964)
McLuhan’s turn of phrase perfectly captured how revolutionary the effects of the communications technologies of the twentieth century would become in defining human existence. McLuhan understood that any change in technology would affect the social life of individuals, nations, and cultures. He believed that cultures evolved and irrevocably changed through the discovery and widespread dissemination of technologies of communication, and so each new technology marked a new stage in a culture’s development.
McLuhan was one of the most articulate theorists of the impact of globalization and the new forms of media that emerged after the advent of computing. His phrase is routinely used to describe the shared responsibility for the lives of others brought about by the sense of immediacy produced by electronic media. CRD
1964
The Little Red Book
Mao Zedong
Core principles of the Communist Revolution in China, presented by Mao
The Little Red Book, or Quotations from Chairman Mao Tse-tung, first distributed in China in 1964, was a distillation of the thought of Mao Zedong (1893–1976), the leading theorist of Chinese Marxism or Maoism. Mao himself did not compile The Little Red Book; it was a selection of his speeches and writings spanning many years. Mao argued that nothing existed in the Chinese nation except for the masses and the Chinese Communist Party. All existence was war between social classes. His book instructed, “Everyone lives as a member of a particular class, and every kind of thinking, without exception, is stamped with the brand of a class.” Revolutionary war was the “war of the masses, impossible for any force on earth to smash.” The Little Red Book promised nothing but a bright future for China, where the wealth of the country was created by “the workers, peasants, and working intellectuals.”
“We must have faith in the masses and we must have faith in the Party.”
Mao Zedong
Mao argued that conflict between social classes was the driving force behind history. The future of China lay in the ability of the nation to modernize its industry and agriculture. China was also to become an economically self-sufficient nation, capable of defending itself from capitalist countries bent on its destruction. China was finally to lead an international revolution in which capitalism and the enemies of socialism and equality were to be vanquished for good. This would lead to an era of prosperity and harmony.
The book was perhaps one of the most printed in history: it was required reading in communist China during Mao’s lifetime, was very widely distributed, and was committed to memory by many. CRD
1964
Hamilton’s Rule / Kin Selection
W. D. Hamilton
Altruism within a family group makes sense in evolutionary terms
Hamilton’s Rule explains the cooperative behavior of social insects such as the honey bee.
Although British evolutionary biologist W. D. Hamilton (1936–2000) did not coin the term “Hamilton’s Rule,” he gave it firm theoretical backing in his two 1964 papers titled “The Genetical Evolution of Social Behaviour.” The rule refers to a definition by British evolutionary biologist John Maynard Smith (1920–2004): “By kin selection I mean the evolution of characteristics which favor the survival of close relatives of the affected individual.” Hamilton’s Rule argues that if an individual lays down his life for his fellows, say two brothers (though not one brother only) or eight cousins, then, evolutionarily, it is a fair deal. If a behavior increases the fitness of related individuals, or if more members of the family survive due to the actions of one individual, then this can more than compensate for the loss of fitness suffered by the acting individual, since close relatives are likely to carry copies of the same genes that favor the survival of the family.
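In the compact form later distilled from Hamilton’s papers, the rule is usually written as a simple inequality relating benefit, cost, and relatedness.

```latex
% Hamilton's rule: an altruistic act is favored by selection when r * B > C,
% where B is the reproductive benefit to the recipient, C the cost to the altruist,
% and r the coefficient of relatedness between them.
\[
  rB > C, \qquad
  r_{\text{full sibling}} = \tfrac{1}{2}, \qquad
  r_{\text{first cousin}} = \tfrac{1}{8}.
\]
% Hence the break-even of "two brothers or eight cousins": saving two siblings
% (2 x 1/2 = 1) or eight cousins (8 x 1/8 = 1) just offsets the sacrifice of the
% altruist's own single copy of the genes in question.
```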
Naturalist Charles Darwin (1809–82) had said: “No instinct has been produced for the exclusive good of other animals, but each animal takes advantage of the instincts of others.” Later studies have demonstrated that animals frequently aid distressed relatives while ignoring the travails of unrelated individuals, even when the reproductive cost of helping outweighs any direct benefit to the helper.
Hamilton’s Rule transformed our view of nature. After Darwin published his findings, people saw the natural world as a space of fierce competition between individuals in the struggle for resources. However, this view was unable to explain a number of animal behaviors, particularly human ones. The theory of kin selection explains the prevalence of altruism and sacrifice in animals, and also many of the cooperative behaviors that maintain law and order. It elucidates the basis of solidarity among families at the level of evolutionary fitness, and gives us a glimpse into how the first societies evolved. CRD
1964
Artworld Theory
Arthur Danto
“Art” is “art” only because its audience has a shared understanding of what art is
One and Three Chairs (1965), a conceptual work by Joseph Kosuth, has full Artworld approval.
In the essay “The Artworld,” first published in October 1964 in the Journal of Philosophy, U.S. art critic and philosopher Arthur Danto (b. 1924) argued that art was art only if the audience interpreted it as existing within the “Artworld.” A work such as Brillo Box by Andy Warhol (1928–87) was art—and not merely an item available at any store—because the audience that was viewing the work understood it as art. Danto argued that objects gained this recognition through the transformative power of “the theory of art,” which set the rules for distinguishing art from non-art. The theory of art itself relied on specific norms and canons of interpretation that were supplied by what Danto termed the “Artworld.”
The Artworld was a conception of art informed not merely by the past work of artists but also by the collective opinion of scholars, patrons, audiences, and journalists on the nature of art. It encompassed what museums had considered to be art through the ages. Importantly, art was not merely an imitation, as had been supposed in classical times, nor did it depend upon the opinion of a single individual. Danto’s conception of the Artworld thus solved the problem of how discussions about art could be both a subjective appraisal and a conclusion commanding universal assent.
Danto’s essay proved hugely influential, in particular on the Institutional Theory of Art that was put forward by U.S. philosopher of art George Dickie (b. 1926). Danto’s conception of art as defined by culture, tradition, and audience interpretation allowed art criticism to take into account, for the first time, the social context of the reception of a work of art as well as the history of interpretations of what counted as art. It fused present and past in a novel way, allowing art criticism to emerge from “aesthetics” and into the broader realm of history and social life. CRD
1964
Higgs Boson
Peter Higgs
The most elementary type of particle explains how and why “mass” exists
A graphic of a proton-proton collision, one of many recorded during the search for the Higgs boson.
The Higgs boson was first posited in 1964 by British theoretical physicist Peter Higgs (b. 1929). The existence of the particle type was tentatively verified on July 4, 2012, using measurements from the Large Hadron Collider (LHC) in Switzerland. The existence of the Higgs boson validates the “Standard Model” of particle physics and explains how and why elementary particles, such as electrons and the quarks inside protons and neutrons, have mass. Just as important, the mechanism behind it helps to explain why the forces acting between particles in an atom, such as the electromagnetic and the weak force, differ so markedly in character.
By the 1960s, physicists had discovered a number of elementary particles and the forces that governed their interaction. The Standard Model described all the particles that make up matter and the interactions between them, including electrons and the quarks that make up protons and neutrons. Due to a mathematical quirk (the symmetries on which the theory is built), the Standard Model seemed to require that all of these particles be without mass. Mass, then, had to emerge from some interaction. Scientists proposed that the particles interact with an all-pervading “Higgs field,” whose quantum excitation is the Higgs boson; the strength of a particle’s interaction with this field is what we measure as its mass.
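For readers comfortable with the notation, the mechanism can be compressed into a pair of standard textbook formulas; this is a sketch of the conventional presentation, not a derivation.

```latex
% Conventional sketch: the Higgs field phi has a potential whose minimum lies away
% from zero, so the field settles at a nonzero value v throughout space.
\[
  V(\phi) = -\mu^{2}\,\phi^{\dagger}\phi + \lambda\,(\phi^{\dagger}\phi)^{2},
  \qquad v = \sqrt{\mu^{2}/\lambda}.
\]
% Particles coupled to the field acquire masses proportional to v; a fermion with
% Yukawa coupling y_f, for example, gets
\[
  m_{f} = \frac{y_{f}\,v}{\sqrt{2}}.
\]
```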
“We open a new window … into … 95 percent of the unknown universe.”
Rolf-Dieter Heuer, particle physicist
The verification of the existence of the Higgs boson vindicated large-scale science projects funded by government, such as the LHC. With the Standard Model complete, there has been discussion of moving on to a “new physics” that could potentially lead to novel technologies, much in the same way that discoveries in physics paved the way for advances in electronics and computing technology. CRD
1964
Identity Theft
United States
Stealing a person’s identity for the purposes of espionage or financial gain
As the Billings Gazette of Montana reported in 1964, “Four Americans who suffered a theft of their identities were listed on Tuesday as government witnesses at the Brooklyn spy trial of a Russian couple, who used their names.” In the Cold War context, identity theft meant the use of another person’s name by spies to conceal who they really were. More recently, identity theft has become associated with the assumption of a person’s identity for the express purpose of financial gain through fraud. This change in definition points to a fundamental shift in U.S. culture.
Identity theft in the more contemporary sense has grown more common as ways of stealing personal, particularly financial, data have proliferated while the threat of Russian spies has diminished. Although the Internet provides thieves with an inordinate number of ways to steal bank details, addresses, and telephone numbers in order to perpetrate fraud (including “phishing,” posing as legitimate entities to obtain personal information), identity theft is also perpetrated using the regular mail.
“Surfing the Internet provides new frontiers for identity thieves.”
San Diego Daily Transcript (1994)
In response to the methods used by criminals, banks have imposed protections such as encryption software for financial transactions. The United States Congress has passed a number of laws designed to protect its citizens against the worst effects of identity theft. An entire field of “fraud protection” has emerged in the United States in order to better define the laws concerning the definition and punishment of identity theft. Numerous U.S. federal agencies now prosecute identity thieves. CRD
1964
Democratic Peace Theory
Dean V. Babst
The notion that democratic governments do not fight each other
U.S. president John F. Kennedy poses with Harold Wilson, the future British prime minister, in the early 1960s. Historically, their two countries have stood united against a number of totalitarian regimes.
In 1964, U.S. sociologist Dean V. Babst (1921–2006) published the first paper on democratic peace theory in the Wisconsin Sociologist. Drawing on A Study of War (1942) by U.S. political scientist Quincy Wright (1890–1970), Babst argued that in the period from the emergence of the United States in 1789 up to 1941, not a single democratic nation had gone to war with another democratic nation. For Babst, this was extremely significant because democratically elected governments in the nineteenth and twentieth centuries “have grown greatly in number and size to become a world force.” Babst believed that the continued proliferation of democratic governments would lead to a human future with far less bloodshed and armed conflict.
“And the reason why I’m so strong on democracy is democracies don’t go to war with each other. I’ve got great faith in democracies to promote peace.”
George W. Bush, U.S. president 2001–09
According to the democratic peace theory, democratically elected governments do not go to war with one another. They have not done so throughout history because the individuals in democratic nations may choose whether or not to go to war and, given the choice, will not do so. The factors that give a country’s people such a choice include freedom of speech and of the press, and the vesting of governmental decision-making power in freely elected institutions, such as the U.S. Congress or the British Parliament, rather than in a hereditary ruler or a dictatorial regime.