Democratic peace theory remains central to many assumptions of the United States’ foreign policy. The administrations of George H. W. Bush (1989–93) and Bill Clinton (1993–2001) argued that a post-Soviet Russia would pose no military threat if it introduced democratic reforms. Democratic peace theory was also central to the George W. Bush administration’s justification of the Iraq War in 2003 as a conflict waged to protect U.S. interests at home while promoting democracy abroad. CRD

1964

SEALAB

U.S. Navy

Underwater habitats test the effects of isolation and deep-sea diving on humans

Aquanaut Scott Carpenter stands atop SEALAB II and gives the signal for it to be lowered in 1965. Carpenter joined the SEALAB project after working as an astronaut for NASA.

The ocean deeps offer more than the possibility of discovering new species in an unfamiliar habitat. There is also much to learn about the physiological and psychological effects of placing human beings for extended periods in the extremely isolated, artificial environment of a deep-sea station.

“During the 1960s, new frontiers were being explored—space and sea.”

Michelle Brown, Santa Barbara Independent (2012)

The United States Navy SEALAB program enabled humans, for the first time in history, to carry out research and salvage operations on the floor of the ocean. It was conceived as part of the navy’s “Man in the Sea Program,” which was tasked with understanding the physiological effects on humans of increased barometric pressure, a helium-rich atmosphere, and prolonged isolation. All of the relatively small, steel SEALAB structures—comparable to submarines but without engines—were connected by a hose that supplied fresh water. There were three SEALAB submersible habitats, manned by so-called “aquanauts.”

SEALAB I was lowered into the water off the coast of Bermuda in July of 1964, although the experiment was halted after only eleven days due to an impending tropical storm. SEALAB II was operational in 1965 and introduced a number of novelties not found in the first structure: showers, a laboratory to record conditions on the ocean floor, and a trained porpoise, Tuffy, who delivered supplies in response to a buzzer. SEALAB III, completed in 1969 and submerged at much greater depth than the previous two SEALABs, was scrapped after it developed a leak.

During the 1960s, SEALAB was a national sensation. It improved deep-sea salvage and rescue techniques, and provided psychiatrists and other medical professionals with a wealth of new information about the effects of the deep-sea atmosphere. Discussions of human isolation pertaining to Mars missions now incorporate observations from SEALAB. CRD

1965

Repressive Tolerance

Herbert Marcuse

Repression of free thought under the guise of expression of free thought

The idea of repressive tolerance was introduced in 1965 in an eponymous essay by German philosopher, sociologist, and political theorist Herbert Marcuse (1898–1979). Marcuse explained how liberal democracies inhibit constructive social change: affluence produces a surplus of consumer items, busyness, sexual provocations, and chatter, all paradoxically promoting a repressed and uncritical society. This is no historical accident, he argued, but the work of a totalitarian, capitalist system seeking to maintain itself.

Marcuse specifically identified the promotion of tolerance as a powerful tool in suppressing alternative social developments. He characterized tolerance of repressive speech by the public as “inauthentic” since it doubly ensures that marginalized voices will never be heard. This apparent contradiction is pragmatically sensible: if the raison d’être of tolerance is to preserve the ground upon which free speech is to survive, anything that would undermine the significance and possibility of free speech must be resisted: “the ways should not be blocked on which a subversive majority could develop.”

“Liberating tolerance, then, would mean intolerance against … the Right …”

Herbert Marcuse, Repressive Tolerance (1965)

So, what is it that transmutes a policy of free speech into an instrument of repression? Rational discussion is no longer possible because various media have produced a commonsense totalitarian view. Any departure from the commonsense view is blocked. What no one should tolerate are linguistic blocks to free thought, such as so-called “political correctness.” Until that is recognized, modern democracy will be barely distinguishable from fascism. JC

1965

Cryonics and Cryogenics

Karl Werner

Preserving the deceased in anticipation of remarkable future medical advances

A cryo-capsule containing the body of California psychology professor James Bedford (1967).

In 1965 a New York industrial designer named Karl Werner coined the word “cryonics”—derived from the Greek for “icy cold”—to refer to the science of low-temperature preservation of humans and animals whose lives cannot be sustained by contemporary medicine. The term “cryogenics” stems from the Greek for “the production of freezing cold,” and is used today for the science and technology of very low temperatures. The point on the temperature scale at which ordinary refrigeration ends and cryogenics begins is not well defined, but most scientists assume it to be -238 °F (-150 °C) or below.

The rationale for cryonics is that people who are considered dead by current legal or medical definitions may not necessarily be dead according to the more stringent “information-theoretic” definition of death. Information-theoretic death describes the condition of a person whose recovery is theoretically impossible by any physical means. The concept arose in the 1990s in response to the problem that, as medical technology advances, conditions previously considered to be death, such as cardiac arrest, have become reversible.

“ … cryonics cannot be dismissed simply by calling its subjects ‘dead.’”

Brian Wowk, “Medical Time Travel” (2004)

Cryonics advocates say that current technology can preserve the fine cell structures of the brain in which memory and identity reside, and that demonstrably reversible cryopreservation is not necessary to achieve preservation of brain information that encodes memory and personal identity. They believe that the anatomical basis of mind can be preserved sufficiently to prevent information-theoretic death until future repairs might be possible. JC

1965

The Atkins Diet

Robert Atkins

A concerted approach to weight loss by choosing what to eat

Whether our cave-dwelling ancestors ever worried about their diet of woolly mammoth is unrecorded. But throughout history, people—mainly women—have altered their diet in order to achieve some desired effect. It is therefore surprising that the first major book about dieting appeared as recently as 1972. Thanks to Dr. Robert Atkins (1930–2003), whose Dr Atkins’ Diet Revolution was an instant best seller, we are now far more aware of the effects of what we eat, and are awash with such instructional manuals.

“A revolution in our diet thinking is long overdue.”

Dr. Robert Atkins, Dr Atkins’ Diet Revolution (1972)

Atkins was a doctor with a medical degree from Cornell University Medical College and a private cardiology practice in New York. As a result of the stress of his work and poor eating habits, his weight ballooned to 224 pounds (100 kg). An article in the October 1963 issue of The Journal of the American Medical Association exploring the research of Dr. Alfred Pennington, who recommended removing all starch and sugar from meals and increasing fat and protein, prompted Atkins to try this restrictive diet, with immediate effect. He recommended the diet to his patients and, in 1965, appeared on “The Tonight Show” to promote his own dietary plan to lose weight. This regime was published in Vogue magazine in 1970 and was known for many years as “The Vogue Diet.” The publication of Atkins’s first book in 1972 opened the floodgates of his success.

Not everyone was convinced by the Atkins diet: critics argued that his idea of carbohydrate as the bad guy oversimplifies the metabolic process and that his proposals for a low-fiber diet could have long-term medical consequences. For his part, Atkins acknowledged that he did not know how his diet worked, and never put the diet to peer review. His steady stream of books and articles, however, found a ready audience anxious to lose weight as their waistlines expanded. SA

1966

Less Is a Bore

Robert Venturi

A postmodern counter to Mies van der Rohe’s well-known modernist dictum, “Less is more”

Architect Robert Venturi designed numerous articles of bespoke furniture, including this “Gothic Revival” chair (1984), painted by Philadelphia artist R. Michael Wommack.

German-U.S. architect Ludwig Mies van der Rohe (1886–1969), like many of his post-World War I contemporaries, had wanted to establish an architectural style that expressed the modernist ethos of the twentieth century. The style he created celebrated extreme clarity and simplicity and was characterized by a minimal framework of structural order, balanced against an implied freedom of free-flowing open space. In his mature buildings he used modern materials, such as industrial steel and plate glass, to define interior spaces. He called his buildings “skin-and-bones” architecture.

“Each city is an archetype rather than a prototype, an exaggerated example from which to derive lessons for the typical.”

Robert Venturi

During the 1950s, U.S. architect Robert Venturi (b. 1925) emerged as a prominent critic of the functionalist and symbolically empty architecture of corporate modernism. In 1966 he published what he called his “gentle manifesto,” Complexity and Contradiction in Architecture, in which he made a case for the “difficult whole” rather than the diagrammatic forms popularized by Mies van der Rohe. The manifesto demonstrated, through many examples, an approach to understanding architectural composition in terms of complexity, richness, and interest. Drawing from both vernacular and high-style sources, Venturi introduced fresh lessons to be learned from examining the buildings of architects as varied as Michelangelo, Alvar Aalto, Frank Furness, and Edwin Lutyens.

Venturi’s own buildings typically juxtapose architectural systems and elements, and aim to acknowledge the conflicts often inherent in a project or site. This “inclusive” approach contrasted with the typical modernist effort to resolve and unify all factors in a complete and rigidly structured—and possibly less functional and more simplistic—work of art. Venturi’s work arguably provided a key influence at important times in the careers of architects Robert Stern, Philip Johnson, Michael Graves, Graham Gund, and James Stirling, among others. JDP

1966

LaVeyan Satanism

Anton LaVey

A religious philosophy based on individualism, self-control, and “eye for an eye” morality, rejecting the teaching of traditional religions such as Christianity

Anton LaVey poses in front of an inverted pentagram, a symbol of the Church of Satan, in 1967. The downward pointing vertices represent opposition to the Trinity.

The Church of Satan was founded by Anton LaVey (1930–97) on April 30, 1966. In 1969 LaVey published The Satanic Bible, a collection of essays, observations, and rituals relating to the teachings of the church. Rather than advocating evil, LaVeyan Satanism (often referred to simply as “Satanism”) promotes humanistic values such as self-assertion, rebellion against unjust authority, vital existence, and undefiled wisdom. Satanists regard The Satanic Bible as an authoritative text, and it has been referred to as “quasi-scripture.”

“Blessed are the destroyers of false hope, for they are the true Messiahs.”

Anton LaVey, The Satanic Bible (1969)

Satanism asserts that the fundamental truth of the nature of reality is not known and that doubt is vital in the absence of proof. Satanism does not hold that “a life appropriate to a rational being” is the sole standard of moral right. If anything, Satanism holds that indulgence in life or “fun” is the highest standard of ethics. Satanists see reason as a means to knowledge but do not deem its possession as morally significant. Rather, the Satanic view sees as ethical the reality of domination of the weak by the strong. Believers have been described as “atheistic Satanists” because they believe that God is not an external entity, but rather a projection of a person’s own personality—a benevolent and stabilizing force in a person’s life.

The Church of Satan proved popular with the media, thanks to tales of eccentric rituals at LaVey’s home and its association with celebrities such as Jayne Mansfield and Sammy Davis, Jr., and by the time that The Satanic Bible was released in 1969 its membership had grown to well over 10,000 worldwide. Fractures in the organization appeared in the 1970s, leading to the formation of numerous alternative Satanist groups, but LaVey’s ideas have continued to find followers. Since its publication there have been thirty printings of The Satanic Bible and more than a million copies have been sold. JP

1966

Database

IBM

Electronic data should be organized for future use and ease of retrieval

Within any information storage unit, from a simple card index to a mainframe computer, a database is a collection of data designed and prearranged to aid its retrieval and use. The term “database” was coined by workers on military computer systems and referred to information storage and retrieval techniques on “time-shared” computers, which individual scientists used to run specific programs for a limited amount of time, sharing the computing power of massive pieces of hardware.

In the 1960s, as the amount of data stored by computers increased and falling costs moved the computer from the halls of government into private industry, two database system models were developed. The first was a hierarchical model known as IMS, introduced by IBM in 1966, which enabled the user to view the data as a hierarchical tree. A network model, in which different record types could be linked, was developed by CODASYL (Conference on Data Systems Languages) in 1969, based on a system invented by U.S. computer scientist Charles Bachman (b. 1924). In an effort to supersede both systems, English computer scientist Edgar F. Codd (1923–2003) proposed a “relational model” in 1970, which organized the data into simple tables and defined the relationships between them. This purely logical organization of data could be used with any data-processing machine and enabled large amounts of data to be efficiently organized and quickly retrieved.
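
The contrast between these models is easiest to see in miniature. The sketch below, a purely illustrative example using Python’s built-in sqlite3 module (the table names and data are invented here), shows the essence of Codd’s relational idea: data held in simple tables, linked by matching values rather than by a fixed hierarchy, and retrieved with a logical query.

```python
import sqlite3

# Two simple tables; the books table refers to authors through a shared key value.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE books (title TEXT, author_id INTEGER REFERENCES authors(id))")
db.execute("INSERT INTO authors VALUES (1, 'Edgar F. Codd')")
db.execute("INSERT INTO books VALUES ('A Relational Model of Data', 1)")

# Retrieval is a logical operation over the tables, independent of how the
# data is physically stored.
for name, title in db.execute(
    "SELECT authors.name, books.title "
    "FROM books JOIN authors ON books.author_id = authors.id"
):
    print(name, "wrote", title)
```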

With the advent of the Human Genome Project, global warming research, and numerous scientific and technological experiments, the amount of data that computers are expected to store and process is immense. The storage, organization, and retrieval of such immense quantities of data have been made possible with the aid of database software. CRD

1966

Functionalism

David Lewis

A theory that mental states are constituted solely by their functional role

What is the mind, what role does it play in life, and how does it relate to consciousness and the physical body? The philosophical theory of functionalism was developed to address these issues.

At its core, functionalism is the idea that the mind is a function of the parts and processes of the brain. It states that what makes something a mental state depends not on its internal constitution but solely on its function, on the role it plays. Mental states—such as desire, distress, pain, belief, and so on—therefore have a purely functional role and are determined by sensory perceptions, behaviors, and other mental states. In his essay “An Argument for the Identity Theory” (1966), David Lewis (1941–2001) explored the meanings of these mental states in an approach often referred to as analytic or conceptual functionalism. He proposed that while terms such as “belief” or “pain” get their meanings from our common sense approach to language, that meaning is not rigorous enough in theoretical terms. It is necessary to state that a mental state C is the state that is typically caused by B and in turn causes D. Pain, for example, is caused by putting your hand on a hot cooker and produces a loud cry and anger at the person who left the hotplate on. These different stages are all necessary to explain the mental state of pain.

“The definitive characteristic of any (sort of) experience as such is its causal role …”

D. Lewis, “An Argument for the Identity Theory” (1966)

At a theoretical level, functionalism has developed as an alternative to the old Cartesian dualist ideas of the mind and body being two separate entities, and the more recent behaviorist ideas that allow only for a physical existence. Though a technical theory, it allows us to better understand the nature of the human mind. SA

1966

Kwanzaa

Maulana Karenga

A seven-day secular holiday celebrating African American culture and traditions

The Kwanzaa holiday, from December 26 to January 1, was introduced in 1966 by African American activist and author Maulana Karenga (b. 1941) as an annual observance designed to celebrate African American culture and traditions. Seven principles, the Nguzo Saba—namely unity, self-determination, collective work and responsibility, cooperative economics, purpose, creativity, and faith—are promoted during the celebrations. The principles are represented by seven candles in a holder called a “kinara,” and celebrants light the candles during the week. Kwanzaa celebrants also wear traditional African clothing and discuss shared values during celebrations, and enjoy feasting and present-giving on the final day.

Karenga created Kwanzaa so that African Americans would have an alternative holiday to Christmas, one that celebrated black culture, tradition, and history. The word kwanza means “first” in KiSwahili, but Karenga added an extra “a” to the word to give it seven letters, corresponding to the seven days and seven principles of the holiday. The holiday celebrates the first arrival of the yearly harvest, and although it focuses on African heritage and culture, it is not developed from any specific African tradition or holiday.

“This is black power in its most lasting form. It’s seen in the culture.”

Keith Mayes, professor and author

It is difficult to determine exactly how many people celebrate Kwanzaa every year, but estimates range widely, from about 500,000 to 5 million Americans, to as many as 30 million people or more around the world. The initial spread of Kwanzaa slowed after the Black Power movement of the 1960s and 1970s diminished in the United States, but it has retained its importance as a nonreligious holiday to many. MT

1966

Black Power

Stokely Carmichael and Charles V. Hamilton

Radical assertion of black independence rather than black social integration

U.S. athletes Tommie Smith and John Carlos give the black power salute at the 1968 Olympic Games.

In 1966 black activists Stokely Carmichael (1941–98) and Charles V. Hamilton (b. 1929) published Black Power, stating that blacks must work together to achieve their cultural, economic, and political liberation. They called for the black people of the United States “to unite, to recognize their heritage, to build a sense of community … to define their own goals, to lead their own organizations.”

Their approach was militant: “When you talk of black power, you talk of building a movement that will smash everything Western civilization has created.” Images of young people singing “We Shall Overcome” were replaced in the media by new ones of militant black men and women wearing black berets, raising their fists, and carrying guns. Goals of social justice and integration were replaced by ideas of black separatism and power, harking back to the Black Nationalism that had been preached in the 1920s by Marcus Garvey (1887–1940).

“I am black. I know that. I also know that while I am black I am a human being …”

Stokely Carmichael, “Black Power” speech (1966)

In 1966 and 1967, Carmichael lectured at campuses around the United States and spoke in North Vietnam, China, and Cuba. In 1967 he became honorary prime minister of the Black Panthers, the ultra-militant urban organization begun by Huey P. Newton (1942–89) and Bobby Seale (b. 1936). He moved to Guinea, West Africa, in 1969, calling on all black Americans to follow his example. In July 1969, he resigned from the Black Panther Party because of what he called “its dogmatic party line favoring alliances with white radicals.” He advised, “dismiss the fallacious notion that white people can give anybody their freedom … Black power can only be realized when there exists a unified socialist Africa.” JDP

1966

Virtual Reality

Ivan Sutherland

A computer-generated environment able to simulate physical presence

A researcher in the virtual-reality room at Tokyo University.

The origin of the term “virtual reality” can be traced back to French playwright, poet, actor, and director Antonin Artaud (1896–1948) in his seminal book The Theater and Its Double (1938), in which he described theater as “la réalité virtuelle,” in which “characters, objects, and images take on the phantasmagoric force of alchemy’s visionary internal dramas,” creating a “purely fictitious and illusory world in which the symbols of alchemy are evolved.”

In 1965, U.S. computer scientist Ivan Sutherland (b. 1938) envisioned what he called the “ultimate display.” Using this display, a person would look into a virtual world that appeared as real as the physical world. That world would be seen through a Head Mounted Display (HMD) and be augmented through three-dimensional sound and tactile stimuli. A computer would maintain the world model in real time, with users manipulating virtual objects in a realistic, intuitive way.

“[Virtual reality] is a looking glass into a mathematical wonderland.”

Ivan Sutherland

In 1966, Sutherland built the first computer-driven HMD; the computer system provided all the graphics for the display (previously, all HMDs had been linked to cameras). The HMD could display images in stereo, giving the illusion of depth, and it could also track the user’s head movements, allowing the field of view to change appropriately as the user looked around.

Mychilo Cline, in his book Power, Madness, and Immortality: The Future of Virtual Reality (2009), predicts that, as we spend more and more time in virtual reality, there will be a gradual “migration to virtual space,” resulting in unimagined changes in economics, worldview, and culture. JDP

1966

Cultural Revolution

Mao Zedong

A proposed total break with traditional ideas and customs

Mao Zedong (1893–1976) took power in China in 1949 as leader of the Chinese Communist Party. With private enterprise and land ownership abolished, by the mid-1960s he was seeking to take the transformation of Chinese society to a whole new level. Communists had previously assumed their revolution would make the best of traditional culture and education available to the masses. Instead, Mao turned culture into a revolutionary battleground. In February 1964 he expressed contempt for formal education: “I do not approve of reading so many books.” In 1965 his criticism spread to theater, with institutions such as the Beijing Opera being forced to stage revolutionary dramas.

In 1966 Mao formally launched the Great Proletarian Cultural Revolution, calling for an attack on four old elements in Chinese society: old customs, old habits, old culture, and old thinking. Chinese youth, organized into the Red Guards, was encouraged to denounce and humiliate its schoolteachers and university professors—especially shocking in a country where respect for elders was a fundamental Confucian value. Objects reflecting traditional Chinese culture were confiscated and destroyed, from chess sets to kites. University-educated intellectuals and officials were sent to work in factories and farms to learn from illiterate peasants and workers.

“Proletarian literature and art are part of the whole proletarian revolutionary cause …”

Mao Zedong

Mao’s radical program for the erasure of the corrupt past and his extension of the revolutionary struggle into the cultural sphere found many admirers in the West, where Maoists formed an important element in the student revolts of the late 1960s. RG

1967

Deconstruction

Jacques Derrida

Confronting the metaphysical illusions embedded in Western thought

Jacques Derrida in 1987. He once wrote, “I believe in the value of the book, which keeps something irreplaceable, and in the necessity of fighting to secure its respect.”

Deconstruction is a school of textual analysis that originated in France in 1967 with the publication of a book, Of Grammatology, by French philosopher Jacques Derrida (1930–2004). This form of analysis is appropriate to texts in which binary oppositions can be detected in the construction of meaning and values; typically these texts structure experience in terms of antinomies, such as essence/appearance and freedom/determinism.

“The philosopher is interested only in the truth of meaning, beyond even signs and names … the sophist manipulates empty signs … the poet plays on the multiplicity of signifieds.”

Jacques Derrida

Events are conceived as alive, singular, and nonrepeatable. The living being undergoes a sensation, and this is inscribed in organic material. The idea of an inscription leads Derrida to the other pole. The machine that inscribes is based in repetition: “It is destined, that is, to reproduce impassively, imperceptibly.” Derrida says: “To give up neither the event nor the machine, to subordinate neither one to the other … this is perhaps a concern of thinking …”

Derrida held that the function of oppositions should be studied, not to eliminate all oppositions (they are structurally necessary to produce sense) but to mark their “difference, undecidability, and eternal interplay.” In order to recognize the antinomy of, for example, appearance and essence, the first step is to recognize that there is “a violent hierarchy”—in this case, essence is regarded as more significant. The second step of deconstruction is to identify “difference”—that is, to identify the basis for the binary opposition. The previously inferior term must be redefined in a new way and reinscribed as the “origin” of the opposition.

This hierarchy can be reversed by proposing that appearances give rise to essences, as existentialist Jean-Paul Sartre (1905–80) did when he claimed, “Existence precedes essence.” The next step is to identify “difference,” that which makes a difference but which we cannot pin down. It is undecidable, for example, whether “now” is in the past or the future; insofar as the difference is undecidable, it destabilizes the original decision that instituted the hierarchy. JDP

1967

The Death of the Author

Roland Barthes

No single interpretation of a text can claim to be more authentic than any other

Photographed in France in 1972, Roland Barthes preferred the term “scriptor” to “author” because the latter confers a perceived “authority” in determining a work’s literary meaning.

“The death of the author” is a postmodern theory of literary criticism formulated by French literary theorist and philosopher Roland Barthes (1915–80) in an essay of the same name (1967). The theory contends that a writer’s intentions, and the context in which their work was created, do not need to be taken into account in interpreting a text for that interpretation to be valid. This is in stark contrast to the belief that art is a medium of communication between author and reader, and the meaning or message that the artist intended to communicate through their art is the only significant aspect of that work of art.

“The birth of a reader must be at the cost of the death of an author.”

Roland Barthes

Barthes’s idea was anticipated by the philosophical insights of W. K. Wimsatt and M. C. Beardsley, who declared, in The Intentional Fallacy (1954), that a poem does not belong to its author, and that to think otherwise is to commit the intentional fallacy; the implication is that objective and valid meaning can be extracted from the text itself.

The popular criticism of Barthes’s theory, and of aspects of postmodernism in general, is that denial of objective meaning in art and text means that every subjective interpretation of that art is equally valid. For example, a popular interpretation of Shakespeare’s Hamlet is based on the Freudian concept of the “Oedipus Complex.” A critic would point out that Shakespeare’s text predates Freud’s theories by several centuries, and therefore Shakespeare could not have intentionally embedded such meanings in his work. The postmodernist would counter that all meaning derived from the work of art exists only in the perceptions of the object’s audience, and as such meaning is so subjective that attempting to distinguish these perceptions as more or less “real” or “valid” is a pointless practice. If that is the case, one question remains: how do we distinguish authentic interpretations from those that fail to reflect the viewer’s or reader’s response? JDP

1967

Six Degrees of Separation

Stanley Milgram

Advances in travel and communication have greatly reduced social distance

If a person is one step away from each person he or she knows, and two steps away from each person known by one of the people he or she knows, then everyone is an average of six “steps” away from each person on Earth. This idea was originally set out in a short story, “Chain-Links” (1929) by Frigyes Karinthy (1887–1938), and later popularized by a play, Six Degrees of Separation (1990), by John Guare (b. 1938).

In 1967 U.S. social psychologist Stanley Milgram (1933–84) designed the “Small World Experiment” to measure this connectedness empirically. He dispatched packages to randomly selected people, asking each recipient to forward the package toward a specific target person who lived in Boston. Each time a package was forwarded, a postcard was sent to Milgram’s office so he could record the relationship that existed between sender and receiver. After repeating the experiment many times using people from different cities, the results revealed that there were approximately five to six links connecting any two people. The research was groundbreaking in that it suggested that human society is a small-world network characterized by short path lengths. The experiments are often associated with the phrase “six degrees of separation,” although Milgram did not use this term himself.
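
The “small world” property that Milgram observed—chains that stay short even as the population grows—can be illustrated with a toy simulation. The sketch below, written in Python, is a simplification rather than a reconstruction of Milgram’s procedure: it gives each person in an invented population a few dozen random acquaintances and counts the links in the shortest chain between two strangers with a breadth-first search.

```python
import random
from collections import deque

def random_acquaintance_graph(n, k, seed=1):
    """Toy society: each of n people is linked to roughly k others chosen at random."""
    rng = random.Random(seed)
    friends = {person: set() for person in range(n)}
    for person in range(n):
        while len(friends[person]) < k:
            other = rng.randrange(n)
            if other != person:
                friends[person].add(other)
                friends[other].add(person)
    return friends

def degrees_of_separation(friends, source, target):
    """Breadth-first search: the number of links in the shortest chain of acquaintances."""
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        person, steps = queue.popleft()
        if person == target:
            return steps
        for other in friends[person]:
            if other not in seen:
                seen.add(other)
                queue.append((other, steps + 1))
    return None  # no chain connects them

society = random_acquaintance_graph(n=10_000, k=30)
# Even among 10,000 strangers, the chains turn out to be only a few links long.
print([degrees_of_separation(society, 0, target) for target in (17, 4_321, 9_999)])
```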

“I am bound to everyone on this planet by a trail of six people.”

John Guare, Six Degrees of Separation (1990)

Milgram published the results of his research, and the article generated longstanding interest in the idea. However, detractors argue that Milgram’s experiment did not demonstrate such a link, and the “six degrees” claim has been decried as an “academic urban myth,” valid only in particular social contexts. JDP

1967

Linguistic Turn

Richard Rorty

A new approach to the relationship between philosophy and language

The phrase “linguistic turn” was popularized with the publication in 1967 of an anthology, The Linguistic Turn, edited by U.S. philosopher Richard Rorty (1931–2007). The book contained studies of how philosophical paradoxes might be solved by learning how to increase our understanding of the language we use to describe them.

The linguistic turn involves the proposition that language in and of itself constructs its own reality. For example, in discussion of the attack on the World Trade Center in New York City in 2001, a person arguing the validity of the linguistic turn would say that we are not really talking about terrorism, aircraft, acts of heroism, or the collapse of buildings; we are really just analyzing the words we use in discussing the subject. Issues—social, political, ethical—have always been framed in terms of language, and proponents of the linguistic turn believe that we should try to understand the nature and structures of language as best we can.

The concept of the linguistic turn first surfaced in the early decades of the twentieth century and was used by thinkers such as British philosopher A. J. Ayer (1910–89), a proponent of logical positivism, who said in a paper in 1936: “The philosopher, as an analyst, is not directly concerned with the physical properties of things. He is concerned only with the way in which we speak about them. In other words, the propositions of philosophy are not factual, but linguistic in character.”

Language, of all things, was suddenly being put forward as a central philosophical theme. Proponents of the linguistic turn, with their notion that language itself can constitute its own reality, often found themselves at odds with the broader philosophical movement. But they were asking interesting and important questions: How do words relate to everyday things? What makes a sentence true or false? BS

1967

Atomic Model

Steven Weinberg and Mohammad Abdus Salam

The nature of physical reality at its indivisible atomic level is proven mathematically after centuries of philosophical thought and scientific hypotheses

Diagram of an atom. Electrons orbit the central nucleus, which consists of protons (red) and neutrons (blue). Each proton is made of two “up” quarks (dark blue) and one “down” quark (green).

In 1803, English chemist and physicist John Dalton (1766–1844) built on the ancient Greek idea of the atom as an indivisible unit by putting forward the idea that gaseous elements are composed of “particles” that are indivisible and indestructible. Dalton even published a table of atomic weights relative to hydrogen.

“From time immemorial, man has desired to comprehend the complexity of nature in terms of as few elementary concepts as possible.”

Mohammad Abdus Salam

The modern atomic model, however—showing electrons in constant motion in an electron “cloud” around a nucleus composed of neutrons and protons—did not arrive in one giant leap forward, or Eureka moment. The development of the atomic model evolved gradually, with each new version taking account of shortcomings of its predecessor. There was the “plum-pudding” model (1898), the cubic model (1902), the Saturnian model (1904), the Rutherford model (1911, in which Rutherford declared an atom to be 99.99 percent empty space), and the Bohr planetary model (1913), which had electrons moving in precise orbits around the nucleus in much the same way as planets circled the sun. The neutron was discovered in 1932, leading to the “gauge theory”—the basis for the standard model—in 1954. Antiprotons were first detected in 1955, quarks postulated in 1964, and on it went.

In 1960, U.S. theoretical physicist Sheldon Glashow (b. 1932) put forward his “electroweak theory,” involving the unification of electromagnetism and “weak force,” one of the four fundamental forces of nature responsible for the decay of subatomic particles. U.S. physicist Steven Weinberg (b. 1933) and Pakistani physicist Mohammad Abdus Salam (1926–96) took Glashow’s work a step further by taking the Higgs mechanism, a process that imparts mass to elementary particles, and adding it to Glashow’s electroweak theory. Abdus Salam proved the theory mathematically in 1967 and the modern atomic model was born. BS

1967

Reggae

The Pioneers

A popular music style combining a slow beat with politically aware lyrics

Internationally renowned reggae singer and guitarist Bob Marley performs in Los Angeles in 1979.

Reggae is a form of popular music that came to the fore in the late 1960s in Kingston, Jamaica, developing out of rhythm and blues, rocksteady, and ska music. The earliest recorded example of the genre is thought to be “Long Shot (Bus’ Me Bet),” made in 1967 by Jamaican vocal trio The Pioneers. Jamaica had gained its independence in 1962 and, initially, reggae lyrics were identified with raising black consciousness in the new country, in addition to attacking economic and social injustices from a black perspective.

The producers Duke Reid (1915–75) and Coxsone Dodd (1932–2004) were influential in creating the reggae sound. Early reggae artists include Toots and the Maytals, Bob Marley & The Wailers, and Desmond Dekker. Reggae is based on a dance beat, being played in either 4/4 or swing time, and it is notable for its offbeat rhythms and staccato chords played on the offbeats of the musical measure. The rhythm is driven by a variety of instruments: drums, electric and bass guitars, and sometimes “scrapers” consisting of a corrugated stick being rubbed by a plain stick.

“Play I some music: (dis a) reggae music! / Roots, rock, reggae: dis a reggae music!”

Bob Marley, “Roots, Rock, Reggae” (1976)

In 1972, Jamaican reggae musician and actor Jimmy Cliff (b. 1948) starred in the film The Harder They Come, which helped reggae reach an international audience. The music spread to the United Kingdom and was adopted by bands such as UB40 and Aswad, as well as rock superstar Eric Clapton. Reggae has since become fused with local musical genres around the world and various reggae subgenres have developed over the years, with names such as roots reggae, dub, ragamuffin, and reggaeton. CK

1967

Anti-psychiatry

David Cooper

Psychiatry labels behavior it does not support or understand as mental illness

The term “anti-psychiatry” was coined by South African psychiatrist David Cooper (1931–86) in his book Psychiatry and Anti-Psychiatry (1967). According to Cooper, mental illness was not illness at all, and what psychiatrists classified as “madness” was a form of self-expression that they deemed unacceptable.

The anti-psychiatry movement had its intellectual roots in the work of an earlier generation of sociologists and psychologists for whom mental illness was not the result of biology or genetic inheritance but a label imposed by society upon individuals who did not adhere to social norms. Cooper, himself a trained psychiatrist, considered traditional psychiatry to be antithetical to human growth and human potential. For Cooper and many others involved in the anti-psychiatry movement during its heyday in the 1960s and 1970s, traditional notions of mental illness and their associated treatments, especially shock therapy, were oppressive and unjustifiable. Mental illness was a “myth,” a diagnosis primarily imposed upon individuals whom society considered eccentric or unmanageable. For anti-psychiatrists, even schizophrenia amounted to nothing but society’s attempt to restrict the freedom of thought and expression of certain individuals.

“Madness … is a movement … toward autonomy. This is [its] real ‘danger’ …”

David Cooper, The Language of Madness (1978)

Anti-psychiatrists were among the first to argue that homosexuality should not be classified as a mental illness, and were also among the strongest supporters for the “deinstitutionalization” of the mentally ill in the 1980s. Due to their efforts, the lay world is now more skeptical of the claims of psychiatrists and the efficacy of their treatments. CRD

1967

Constructivist Epistemology

Jean Piaget

Knowledge is constructed by individuals from their own sensory experiences

Jean Piaget (here photographed in 1975) combined aspects of developmental psychology and constructivist epistemology to produce his influential doctrine of genetic epistemology.

The expression “constructivist epistemology” was first used in 1967 by Swiss psychologist and philosopher Jean Piaget (1896–1980) in the volume “Logical and Scientific Knowledge” of the Gallimard Encyclopedia of the Pleiades. Constructivism proposes new definitions for knowledge and truth based on inter-subjectivity instead of objectivity. Piaget argued that knowledge “does not reflect an objective ontological reality, but exclusively an ordering and organization of a world constituted by our experience” within the constraints of reality. Knowledge does not match reality: it organizes it. The philosophical forerunners of this view of knowledge are said to be Giambattista Vico (1668–1744) and George Berkeley (1685–1753).

“Verum esse ipsum factum: The true is precisely what is made.”

Giambattista Vico, political philosopher

Piaget asserted that children learn actively through an exploration of their environment. Piaget’s learning theory looks through a developmental lens, and considers stages of knowledge acquisition to be directly linked to a child’s developmental stage. Piaget’s theory of “genetic epistemology” contributed greatly to the development of constructivist theory.

Constructivist epistemology asserts that the only tools available to a knower are the senses. It is through seeing, hearing, touching, smelling, and tasting that an individual interacts with the environment. The individual builds a picture of the world from this sensory data. Therefore, constructivism asserts that knowledge resides in individuals; that knowledge cannot be transferred intact from the head of a teacher to the heads of students. The student tries to make sense of what is taught by trying to fit it with their personal experience. On the other hand, meaning does not lie in words, which are based on the constructions of individuals. Communication is possible because individuals’ meanings of words only have to be compatible with the meanings given by others. JP

1967

Lateral Thinking

Edward de Bono

The attempt to override standard assumptions in order to access original ideas

Edward de Bono demonstrates the logical way to tie a balloon. He is widely credited with expanding people’s capacity to think, although critics call for evidence of his success.

Maltese physician and author Edward de Bono (b. 1933) invented the term “lateral thinking” (literally, sideways thinking) and it first appeared in print in his book, The Use of Lateral Thinking (1967). De Bono wrote of the importance of disrupting the dominant patterns preferred by the human brain in order to facilitate potential creative abilities. He explained that with logic a person starts out with certain ingredients, just as a chess player starts out with given pieces. But what are those pieces? In most real-life situations, the pieces—certain perceptions, concepts, and boundaries—are not given, they are just assumed. Lateral thinking is concerned, not with playing with the existing pieces, but with seeking to change those very pieces. It is concerned with perception and organization of the external world into new pieces that we can then “process.”

“Removing the faults in a stage-coach may produce a perfect stage-coach, but it is unlikely to produce the first motor car.”

Edward de Bono

Lateral thinkers use various acts of provocation to incite ideas that are free from prior assumptions. Random words, new stimuli, concept reversals, and other tricks are used deliberately to shift perceptional assumptions for the purpose of generating fresh observations and insights. The popular term for this creative activity is “thinking outside of the box.”

Techniques for lateral thinking can be taught, and in 1974 de Bono put forward an education program named CoRT Thinking, which introduced some eighty different tools for thinking. CoRT helps the student to clearly differentiate lateral thinking skills from logical analyses (vertical thinking) and training in creativity/sensibility (horizontal thinking). Lateral thinking is a genuine alternative to training in other forms of thinking, but must be taught using a didactic and pedagogical approach. It has been successfully integrated into teacher-training programs and the school classroom since de Bono’s book was published. JP

1967

The Society of the Spectacle

Guy Debord

In modern consumer society, imagery has become more significant than reality

This cinema audience exemplifies Debord’s notion of social relationships being mediated by images.

La Société du spectacle (The Society of the Spectacle, 1967) is a work of philosophy and critical theory by French Marxist theorist Guy Debord (1931–94) that traces the development of a modern society in which authentic social life has been replaced with its representation. The work shares many of the assumptions of German critical theorists Theodor Adorno (1903–69) and Max Horkheimer (1895–1973), as presented in their book Dialectic of Enlightenment (1944).

Debord argues that, in a consumer society, “passive identification with the spectacle supplants genuine activity.” Inevitably, social life moves away from “being” to “having,” proceeding into a state of “appearing to have,” or virtual living. “The spectacle” uses imagery to convey what people need and must have.

“All that once was directly lived has become mere representation.”

Guy Debord, The Society of the Spectacle (1967)

In his social analysis, Debord sees the quality of life degraded into “repressive pseudo-enjoyment,” which is progressively impoverished and lacking in authenticity. He compares the present-day role of mass-media marketing to that of religions in the past. Mass-media commodity images produce “waves of enthusiasm for a given product,” resulting in “moments of fervent exaltation similar to the ecstasies of the convulsions and miracles of the old religious fetishism.”

Debord’s proposal is “to wake up the spectator who has been drugged by spectacular images.” He encourages the use of détournement, which involves using spectacular images and language “to disrupt the flow of the spectacle.” This radical action would encourage “a revolutionary reordering of life, politics, and art” and enable authentic being. JP

1967

Reformed Epistemology

Alvin Plantinga

A philosophy of religion that seeks to defend faith as rational

Reformed epistemology originated in 1967 with the publication of God and Other Minds by analytic philosopher Alvin Plantinga (b. 1932), and was developed by him and others in the 1970s and 1980s. Reformed epistemology contends that belief in God is properly basic and rational, and that faith and belief can stand alone, justified in and of themselves—they are not something one needs to be “argued into.” The term “epistemology” was originally coined in philosophy to refer to the study of the nature, scope, and limitations inherent in knowledge, and describes how what we know relates to our beliefs and notions of truth.

Reformed epistemology—essentially a body of apologetic arguments—is used in the hope of demonstrating how traditional arguments against the existence of God are unreasonable and anti-intellectual. It is used to defend faith in a creator as being entirely rational, and explains how such a faith can be argued in the face of a complete lack of empirical evidence. It is a reaction against both evidentialism (that something can be believed only when there is sufficient evidence) and classic foundationalism (that justified beliefs must ultimately be inferred from a narrow class of more basic, self-evident beliefs). It is also a response to the atheistic beliefs of secular society, arguing that it is simply improper for nonbelievers to assume that it is irrational to believe in God unless one can present evidence of His existence. And it is also a challenge to Christians, reminding them that it is improper to build a faith on the foundations of something as transitory as simply a good argument. BS

“ … belief in other minds and belief in God are in the same epistemological boat …”

Alvin Plantinga, God and Other Minds (1967)

1968

Biblical Rhetorical Criticism

James Muilenburg

Analysis of the stylistic devices used in the Judeo-Christian scriptures

A page from a thirteenth-century copy of the Hebrew Bible, with illustrations by Joseph Asarfati. “Hebrew Bible” is a term used by biblical scholars to refer to the Tanakh.

Biblical rhetorical criticism analyzes the Judeo-Christian scriptures to take into account their literary and theological dimensions. It examines the message of the authors by analyzing the stylistic devices, structures, and techniques they used to compose a narrative that would often be read aloud. Rhetorical criticism looks at the relationship between the text and its original intended audience. By reimagining the cultural setting, it examines how a text was used to achieve a particular effect—whether to inform, educate, persuade, or preach—and also considers the text’s meaning. The critic can then communicate the message of the text to a contemporary audience faithfully.

“What I am interested in, above all, is in understanding the nature of Hebrew literary composition, in exhibiting the structural patterns that are employed for the fashioning of a literary unit, whether in poetry or in prose …”

James Muilenburg, “Form Criticism and Beyond” lecture (1968)

Rhetorical criticism of the Bible can be first seen in the writings of the theologian and philosopher St. Augustine of Hippo (354–430). However, it did not become a formal analytical method until 1968, when U.S. theologian and professor of Hebrew Exegesis and Old Testament at San Francisco Theological Seminary, James Muilenburg (1896–1974), delivered his presidential address, “Form Criticism and Beyond,” at the annual meeting of the Society of Biblical Literature at the University of California in Berkeley. In this lecture, he outlined the limitations of form criticism, which he regarded as too generalized. Muilenburg instead suggested a fresh approach to biblical study that he called “rhetorical criticism,” which would consider what he believed to be the inextricable relationship between the form and content of a text.

Muilenburg’s approach was highly influential, and rhetorical criticism went on to become a discipline. Notably, two of his students have become authorities on this method of biblical criticism: Phyllis Trible, author of works such as God and the Rhetoric of Sexuality (1978) and Rhetorical Criticism: Context, Method, and the Book of Jonah (1994), and Dale Patrick, author of Rhetoric and Biblical Interpretation (1990). CK

1969

The Butterfly Effect

Edward Lorenz

A scientist’s provocative insight into chaos and causality

According to the butterfly effect theory, a butterfly flapping its wings in China can lead to unpredictable changes in U.S. weather a few days later.

U.S. mathematician and meteorologist Edward Lorenz (1917–2008) noticed, as many had before him, that the long-term weather forecasts of meteorologists were rarely accurate. He asked himself why major events such as hurricanes in the Atlantic should be so difficult for scientists to predict. His answer to this question has become known as “the butterfly effect.”

“It used to be thought that the events that changed the world were things like big bombs, maniac politicians, huge earthquakes, or vast population movements … The things that really change the world, according to Chaos theory, are the tiny things.”

Terry Pratchett and Neil Gaiman, Good Omens (1990)

Lorenz imagined a butterfly fluttering around in a rain forest. With each beat of its wings, the butterfly stirs the air, creating a minuscule but measurable change in the atmosphere. According to Lorenz, such a tiny input could make the difference between a storm gathering or not, because weather systems are extremely sensitive to small changes in input. A minute alteration in atmospheric conditions in May could result in a vast difference in the weather experienced in June, by tipping the evolution of the weather system in one direction or another. In a sense, a flap of a butterfly’s wing could trigger a hurricane. And this was what made weather so difficult to predict. Since it was impossible to establish exactly the movements of every butterfly—the “butterfly” standing for any of the vast number of factors of all kinds influencing the system—it was impossible to accurately predict the outcome. Systems such as weather were labeled “chaotic” because, although not random, they could not be reduced to a clear chain of causes and effects.
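
Lorenz’s insight is easy to reproduce on a computer. The sketch below, written in Python as an illustration rather than serious numerical work, integrates his three convection equations (the now-famous Lorenz system) with a crude Euler step and follows two trajectories whose starting points differ by one part in a billion; the gap between them grows until the two “forecasts” no longer resemble each other.

```python
# Lorenz's convection equations, stepped forward with a simple Euler scheme.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)            # one starting state of the "weather"
b = (1.0, 1.0, 1.0 + 1e-9)     # the flap of a wing: a one-in-a-billion nudge

for step in range(1, 50_001):
    a, b = lorenz_step(*a), lorenz_step(*b)
    if step % 10_000 == 0:
        gap = max(abs(p - q) for p, q in zip(a, b))
        # The gap grows from invisibly small to the size of the whole system.
        print(f"after {step} steps the trajectories differ by {gap:.6f}")
```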

The butterfly effect is a scientific version of an age-old paradox about trivial causes potentially having disproportionately large effects—as when the seventeenth-century French thinker Blaise Pascal argued that the shape of Cleopatra’s nose may have radically altered the whole course of human history. It has been important scientifically as a stimulus to the development of novel techniques for describing the behavior of chaotic systems. RG

1969

Counterculture

Theodore Roszak

A critical subculture with values and mores subverting those of mainstream society

Young hippies turn their backs on mainstream society at the Isle of Wight Festival in 1969.

The term “counterculture” is attributed to U.S. historian Theodore Roszak (1933–2011), author of The Making of a Counter Culture (1969). Roszak identified common ground between student radicals and hippie dropouts in the 1960s, pointing to their mutual rejection of what he called “the technocracy,” “the regime of corporate and technological expertise that dominates industrial society.” Underpinned by the writings of Herbert Marcuse, Norman O. Brown, Allen Ginsberg, and Paul Goodman, Roszak’s book was read widely by Vietnam War protesters, dropouts, and rebels.

A countercultural movement affects significant numbers of people and expresses the ethos, aspirations, and dreams of a specific population during a well-defined era. When oppositional forces reach critical mass, countercultures can trigger dramatic cultural changes. Examples of countercultural movements in the United States and Europe include Romanticism (1790–1840), Bohemianism (1850–1910), the more fragmentary counterculture of the Beat Generation (1944–64), and the hippie counterculture (1963–75). In each case, a “fringe culture” expanded and grew into a counterculture by defining its own values in opposition to mainstream norms.

“Everything was called into question: family, work, education … sexuality …”

Theodore Roszak, The Making of a Counter Culture (1969)

The lifecycle of a counterculture includes phases of rejection, growth, partial acceptance, and then absorption into the mainstream. Eventually each peaks and goes into decline, but leaves a lasting impact on mainstream cultural values. The “cultural shadows” left by the Romantics, Bohemians, Beats, and Hippies remain highly visible in contemporary Western culture. JDP

1969

The Five Stages of Grief

Elisabeth Kübler-Ross

A crucial understanding of the human reaction to the imminence of death

While working with terminally ill patients in Chicago, the Swiss-American psychiatrist Elisabeth Kübler-Ross (1926–2004) became aware that the medical curriculum failed to address dying and death in any meaningful way. An instructor at the University of Chicago’s Pritzker School of Medicine, she developed her ideas on the subject through interviews and a series of seminars, before publishing them in her book On Death and Dying (1969). A best seller, it revolutionized the way people thought about death and care for the dying.

Kübler-Ross’s model proposed five stages of grief, popularly known by the acronym DABDA. After the initial Denial that it is happening, the patient then becomes Angry before attempting to Bargain their way out of trouble by trying to gain a little longer to live. The reality that this is impossible leads to Depression and then to the final stage of Acceptance. Kübler-Ross was always very clear that not everyone will pass through all five stages, nor do so in the same order. Reaction to death is always an intensely personal response.

“The process of grief always includes some qualities of anger.”

Elisabeth Kübler-Ross, On Death and Dying (1969)

At first the five-stage theory was applied only to those facing their own death, but it was soon expanded to other emotional experiences, such as bereavement or the end of a marriage. The wide range of applications of the theory brought it huge acceptance, but also criticism from clinical psychologists who argued that people experiencing loss are resilient, not grieving. Others felt the theory to be prescriptive, pushing people to rush through the stages rather than letting events unfold naturally. But its very simplicity and caring humanity has won the theory numerous admirers. SA

1969

String Theory

Y. Nambu, H. B. Nielsen, and L. Susskind

A theory that seeks to reconcile general relativity with quantum mechanics

These Calabi-Yau manifolds are computer images of the extra six dimensions predicted by string theory.

Physicists like their world to be unified and orderly, without contradictions or exceptions. There was thus a strong desire to unify the two great—and seemingly incompatible—theories of physics that explain the four fundamental forces of nature: the theory of general relativity, which relates gravity to space and time in the four-dimensional entity of spacetime; and the theory of quantum mechanics, which unites the three other forces of electromagnetism, strong nuclear force, and weak nuclear interaction. Work on this unified theory was advanced by Yoichiro Nambu (b. 1921), Holger Bech Nielsen (b. 1941), and Leonard Susskind (b. 1940) in 1969.

For a unified theory to work, all four forces must be described in the same way. This means that gravity, and thus spacetime, must be packaged into discrete entities that can be measured. In comparison with the other three forces, gravity is very weak, implying that these packages must be minuscule. Finding a mathematical model for these packages provided the motivation behind string theory. In 1968 the Italian scientist Gabriele Veneziano (b. 1942) proposed that a model of strings might help. A year later, three scientists working independently of each other made the big leap. Nambu, Nielsen, and Susskind all proposed that Veneziano’s model was actually a theory of strings: the particles are not single mathematical points but tiny vibrating strings, each fundamental particle vibrating in a different way. At first these strings were thought to be in lines, but it is now realized that they form tiny loops. Crucially, the mathematical description of these loops also described gravity.

String theory, however, requires many more dimensions than the four of spacetime—perhaps ten—rolled up and shrunk inside each other so that only four are visible. Furthermore, there are five different string theories that have been suggested. The search for a unified theory of everything continues. SA

1969

The Frame Problem

John McCarthy and Patrick Hayes

The difficulty of specifying what is left unchanged when an action is performed

Kismet, a robot designed to show facial expressions in response to events around it, showing fear.

In the world of artificial intelligence (AI), the frame problem is the problem of describing the effects of actions or events using logic. It first came to the attention of AI researchers in 1969, when two computer scientists, John McCarthy (1927–2011) and Patrick Hayes (b. 1944), published a paper titled “Some Philosophical Problems from the Standpoint of Artificial Intelligence.”

There are generally agreed to be two main aims of AI: firstly, to engineer objects capable of performing tasks that, if performed by a human, would be said to require intelligence; and secondly, to reach a scientific understanding of the principles underlying such intelligent behavior. The frame problem arose in the early days of AI when even an uncomplicated task, such as moving a block, became complex, because when the block was moved it was necessary to update a vast amount of database information, stating not only that the block was no longer where it had been, but also that everything else about it and its surroundings remained unaltered. Humans effortlessly select what is relevant (the block has been moved) and ignore what is not (everything that stays exactly as it was), but vast amounts of memory were needed to spell this out for a computer. The frame problem, which began as little more than a technical annoyance, soon seemed to be almost endemic: informing a database that one condition has been altered fails, in the world of computer logic, to rule out a raft of other possible alterations.
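
A toy sketch in Python (not McCarthy and Hayes’s logical notation, and with facts and actions invented purely for illustration) makes the bookkeeping burden concrete: after a single “move” action, a naive logical database must explicitly restate every fact that has not changed.

# Toy illustration of the frame problem: the world state as a set of facts.
state = {
    "block_A_at": "table",
    "block_B_at": "table",
    "block_A_color": "red",
    "lamp": "on",
}

def move_block_a(old_state, destination):
    """Apply 'move block A' the laborious way a naive logical system must:
    assert the one fact that changes, then explicitly re-assert every other
    fact as unchanged (the so-called frame axioms)."""
    new_state = {"block_A_at": destination}      # the effect of the action
    for fact, value in old_state.items():        # frame axioms: everything
        if fact != "block_A_at":                 # else is carried over by hand
            new_state[fact] = value
    return new_state

state = move_block_a(state, "on_block_B")
# With n facts and m possible actions, a naive encoding needs on the order of
# n * m such "nothing else changed" statements -- the explosion that McCarthy
# and Hayes identified.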

In the 1980s a series of workable solutions was arrived at, and the frame problem is no longer thought of as an obstacle—though it has not been completely overcome. Practical solutions do not attempt to process every conceivable implication of an action or event, and just as well: the day we are able to program anything like human cognition, with the effortless selectivity it brings, has yet to come. BS

1969

Relational Database

Edgar F. Codd

The organized storage of data in tables that can be linked together

In a world awash with data, the importance of its management is crucial. Computer databases had first appeared in the 1960s, but the early systems were not without flaws. In 1970 a new model of database management was proposed by Edgar F. Codd (1923–2003), a British computer scientist who worked for IBM. He detailed its workings in a report titled “A Relational Model of Data for Large Shared Data Banks.”

Codd’s model worked by organizing data into tables, or “relations,” that represented a single entity—for example, students. The rows of the table, known as tuples, each represented a particular instance of that entity—student A, student B, etc.—while the columns contained data for certain attributes of each tuple—student number, name, address, etc. The relational database system would then allow you automatically to link together different tables by using a particular attribute as a “key,” and so flexibly combine data in two or more of them (for example, the “student” table could be linked to a “course” table by using “student number” as a key; searching the database with a particular student number would then call up all information related to that student number from both tables).
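
The mechanics are easy to demonstrate with any modern relational database. The short Python sketch below, using the standard-library sqlite3 module, builds hypothetical “student” and “course” tables of the kind described above and links them through the student number; the table and column names are invented purely for illustration.

import sqlite3

# A throwaway in-memory database, purely for demonstration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two relations (tables): each row is a tuple, each column an attribute.
cur.execute("CREATE TABLE student (student_no INTEGER PRIMARY KEY, name TEXT, address TEXT)")
cur.execute("CREATE TABLE course (course_id TEXT, title TEXT, student_no INTEGER)")
cur.execute("INSERT INTO student VALUES (1, 'Student A', '1 High Street')")
cur.execute("INSERT INTO course VALUES ('H101', 'History', 1)")

# The shared attribute student_no acts as the key that links the two tables.
cur.execute("""
    SELECT student.name, course.title
    FROM student JOIN course ON student.student_no = course.student_no
    WHERE student.student_no = 1
""")
print(cur.fetchall())   # [('Student A', 'History')]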

“ … a means of describing data with its natural structure only …”

Edgar F. Codd, “A Relational Model of Data …” (1970)

Users of the database could specify the data it would contain, input that data, and then retrieve information from it later, all without knowing how the database program worked. The software system took care of the storage and retrieval of information. The effect of this was enormous, for Codd had created a system of infinite flexibility. One notable use has been in allocating products and issuing invoices to customers. SA

1969

Ageism

Robert Neil Butler

A form of discrimination based on a person’s age

First coined by physician and Pulitzer Prize-winning author Robert Neil Butler in 1969, the term “ageism” originally applied to negative prejudices people hold against seniors. Today, it is more broadly applicable to any act of discrimination based on a person’s age.

Since prehistoric times, cultures around the world have regarded elderly people as repositories of knowledge and experience, holding them in high esteem. That esteem began to give way to negative associations after the invention of the printing press made transmitting information easier, and after the Industrial Revolution (1760–1840) required workers to learn new skills quickly, endure longer hours of manual labor, and move to wherever new jobs were located. By 1969, when Butler identified ageism by name, prejudicial attitudes toward elderly people were widespread in many Western nations.

“Aging is not ‘lost youth’ but a new stage of opportunity and strength.”

Betty Friedan, author and feminist

Ageism is not universally present, and attitudes toward age differ significantly across cultures. A person’s age, like their race or gender, is one of those categories that is instantly apparent when we meet them for the first time. Such instant observations can trigger the stereotypes attached to those categories, and so lead to discrimination. Yet ageism is unique among forms of discrimination in that even if we are not discriminated against today, we might suffer such treatment at a later time in life. As awareness of the issue has increased, many nations have adopted laws and policies that prohibit, and even criminalize, acts of ageism. MT

1970

Speciesism

Richard D. Ryder

The practice of treating humans as superior to all other animals

Speciesism is the idea that being human confers moral rights superior to those of nonhuman animals. Animal rights advocates often compare speciesism with racism and sexism, particularly with regard to humans’ disregard for animals’ suffering.

The concept of animal rights has existed for centuries, as suggested by writers such as the English physician and physicist Thomas Young (1773–1829). However, the notion of “speciesism” did not arise until 1970, when British writer and psychologist Richard D. Ryder (b. 1940) used the term in a pamphlet that he distributed in Oxford, England, as part of a protest against experiments on animals.

Ryder received no response to his leaflet, so he reprinted it with an illustration of a chimpanzee experimentally infected with syphilis, and asked a colleague, David Wood (b. 1946), to add his name so that the leaflet would have a university address on it. Ryder sent the reprint to all the Oxford University colleges, and one of the recipients was Australian philosopher Peter Singer (b. 1946). Ryder used the term again in his essay “Experiments on Animals” in the book Animals, Men and Morals: An Inquiry into the Maltreatment of Nonhumans (1971), which Singer reviewed in The New York Review of Books in 1973, describing it as a manifesto of animal liberation. Singer went on to write Animal Liberation (1975), in which he used and popularized the term “speciesism” to describe the exploitative treatment of animals. Animal Liberation is considered by many to be the founding philosophical statement of the animal rights movement. CK

“Animals may be different from us, [but] this does not make them LESS than us.”

Marc Bekoff, Animals Matter (2000)

1970

Urgent Care Clinic

United States

Providing emergency medical treatment at locations other than hospitals

Urgent care clinics are walk-in centers where patients can receive certain limited forms of treatment for illnesses and injuries that require immediate attention, but which are not serious enough to warrant a visit to an emergency room. Unlike hospitals, which are generally open around the clock, urgent care clinics operate only at set times.

The concept of urgent care can be traced back to 1970, with the first clinics set up in the United States by physicians who wanted to relieve some of the burden on overstretched hospitals without reducing the availability of medicine and treatment. Now long established and numbering more than 10,000, these clinics had their powers and responsibilities codified in April 2009 when the Urgent Care Association of America began issuing certificates to practices that satisfied various criteria, including having on-site diagnostic services, such as phlebotomy and X-ray machines, and pharmacies that dispense medicines only to the clinic’s own patients.

“By definition, urgent care centers function as overflow valves for the public …”

Urgent Care Association of America

Urgent care clinics have since been opened in several other countries around the world, including Great Britain, Australia, Canada, Ireland, and Israel. In New Zealand, urgent care has been recognized as a medical specialty in its own right. Although critics point out the dangers of new patients attending the wrong type of facility for their needs through uncertainty about the exact nature of their problem, urgent care clinics have nonetheless carved out their own niche as informal but effective treatment centers. GL

1970

Sexual Politics

Kate Millett

A groundbreaking attack on the subjugation of women

Symbols of women’s oppression are displayed at a Women’s Liberation parade in central London in 1971.

Are the sex scenes in the writings of authors such as D. H. Lawrence, Henry Miller, and Norman Mailer passages of literature or highly charged political acts and representations of the subjugation of women by men? Kate Millett (b. 1934) was in no doubt, and in a revolutionary book, Sexual Politics, first published in 1970, tackled this subjugation head-on.

Millett’s best seller, developed from her Ph.D. thesis, is sometimes said to be the first book of academic feminist literary criticism. Sexual Politics is, in fact, much more than that. Defining politics as power-structured relationships in which one group of people is controlled by another, she focused on “the role which concepts of power and domination play in some contemporary literary descriptions of sexual activity itself.” She then widened her vision to analyze the social relationship between the sexes from a theoretical standpoint, elevating patriarchy into a political institution in its own right. This was groundbreaking stuff, for Lawrence was a then unchallengeable author and Mailer a widely revered modern master. Not every male author was so criticized: she praised Jean Genet, a gay outsider, as “the only living male writer of first-class literary gifts to have transcended the sexual myths of our era.”

“ … sex has a frequently neglected political aspect.”

Kate Millett, Sexual Politics (1972)

Millett’s work produced a storm of protest, notably from Norman Mailer himself. Some feminists were also unsure. But in her attack on patriarchy masquerading as nature and its insidious effects on modern society, Millett produced a key feminist text that contributed to the second wave of feminism during the 1970s. It continues to influence feminist thinking today. SA

1970

Biblical Canonical Criticism

Brevard Childs

Study of the Judeo-Christian scriptures, focusing on the text of the biblical canon

Biblical canonical criticism attempts to reconcile historical and sociological approaches to biblical criticism, approaching the Bible as a theologically valid document and examining the canonical presentation of biblical books and the theological implications of their final form. It is the final shape of the scriptures, rather than the history of their composition, that is held to be important. Canonical criticism rejects the notion that the intention of the author is available and suggests that the text itself, and its position in the canon, is the only source of meaning.

Canonical criticism was pioneered by U.S. theologian Brevard Childs (1923–2007) in Biblical Theology in Crisis (1970), which focused on the text of the biblical canon as a finished product. Childs regarded biblical text as a witness to God and Jesus, rather than merely a source. The U.S. academic James A. Sanders (b. 1927) coined the term “canonical criticism” in his book Torah and Canon (1972), in which he examined why the Torah ends with Deuteronomy rather than Joshua, suggesting that its final canonical shape reinterprets Israel’s story regarding the fulfillment of the idea of the Promised Land. Childs repudiated the term because he felt it described his approach as a historical-critical technique, rather than as a method of reading the Bible as sacred scripture.

“ … the emphasis should fall on the effect which the layering of tradition has had …”

Brevard Childs

Canonical criticism has spread via the work of scholars such as David S. Dockery (b. 1952). Critics point out that the shape of the canon has changed throughout history, and ask how scholars can decide which version should be used, and how they can be sure that the placing of the books of the Bible is part of its message. CK

c. 1970

Postmodern Music

Various

A genre of music that embraces the absence of one defining structure or ideology, as part of a general reaction to modernism and its scientific approach to artistic creation

John Zorn performs at a concert in 2000. Zorn’s compositions, which juxtapose different styles and genres, have been described as “a sort of musical channel-surfing.”

The term “postmodern” was used sporadically in the late nineteenth century, but did not receive today’s meaning in the arts until architect Joseph Hudnut’s essay “The PostModern House” (1945), in which he regretted some aspects of modern architecture and industrialized manufacturing techniques. As he argued, “We must remind ourselves that techniques have a strictly limited value as elements of expression.” Hudnut instead advocated a spiritual and individualized approach to design and construction, and a harmonious architecture achieved with the help of arts. This understanding of postmodernism as an antithesis of modernism, the perceived science-driven approach to art, would later be complemented by a decentralized and value-relativistic understanding of histories and cultures, and increased skepticism toward technical and economic progress. Postmodernism made its way into musical discourses in the 1970s, as composers such as George Rochberg (1918–2005) and Alfred Schnittke (1934–98) embraced pluralistic approaches, incorporating stylistic elements or direct quotations from the past. Humor, considered antithetical to serious modernism, also became important, as did beauty and naiveté. Other approaches include those of composer and performer John Zorn’s (b. 1953) genre juxtapositions, in which elements of jazz, pop, art music, and film music are contrasted, resulting in a funny and chaotic effect.

“Much of what might be called postmodern music requires of its listeners a certain theoretical sophistication and historical memory.”

Linda Hutcheon, The Politics of Postmodernism (1989)

The “postmodern” label has also been applied retroactively to composers such as Gustav Mahler (1860–1911), thus casting the concept not as a late-twentieth-century musical style, but as an aesthetic approach to tradition. Postmodern works also raise the question of what constitutes modern music: could not works assembled from historical sources sound as new as a modernist work created through mathematical models? PB

c. 1970

Codependency

United States

The dependence on the needs of or control of another

Codependency, a psychological term that first came to prominence in the U.S. state of Minnesota in the 1970s, was originally coined to describe the unhealthy dynamics observed between spouses and family members who are either alcoholic or substance-dependent. Not easy to define, codependency often suffers from having as many definitions as there are experts capable of defining it. Broad in its implications, it was initially viewed as a personality characterized by low self-esteem, intense anxiety in interpersonal relationships, a tendency to go out of one’s way to please others even at the cost of one’s own needs, poorly defined boundaries, a fear of being abandoned, and difficulty in communicating feelings and needs. It has, in recent times, grown to include almost anyone with a pattern of dysfunctional relationships in which the needs of others have taken preeminence over the needs of the self. Even addictions are now increasingly being seen as having their roots in codependency, a trend that has led to claims that codependency is in fact the most widespread addiction ever faced by Western culture.

Inherent weaknesses in the psychology of codependent sufferers mean that those diagnosed often find themselves in harmful relationships, frequently with controlling and manipulative people, which makes seeking treatment difficult. Codependency is a progressive disorder that can quickly become habitual and does not go away easily. Group therapy, rather than self-help or individual psychotherapy, has statistically proven to be the most effective recourse, hinging on the identification of failed coping strategies that have often persisted since childhood. A sad commentary on the condition can be seen in the joke: “When a codependent dies, someone else’s life flashes before their eyes.” BS

1970

Recycling

United States

The reprocessing of discarded material for reuse

Human beings have recycled ever since they first manufactured tools, but the modern practice of recycling is associated with the first Earth Day—held in the United States on April 22, 1970—which prompted a movement that established thousands of community recycling centers in the United States and elsewhere.

The potential advantages of recycling are both economic and environmental, although it is not always easy to conduct a cost/benefit analysis. It is generally cheaper to reprocess discarded material than to process new material, but the costs of collecting and sorting recyclables are not negligible. Similarly, although recycling reduces the environmental impact of landfills and incineration, the collection and transport of recyclables contributes to air pollution, and toxic recyclables can affect the health of people near recycling sites. Regulation at both the manufacturing and the consumption ends of production can improve the efficiency of recycling: for example, when manufacturers are required to use easily recyclable containers and packaging and when consumers are required to sort their recyclables prior to collection.

“Recycling is a good thing to do. It makes people feel good to do it.”

Barry Commoner, biologist and educator

Recycling is already popular, both with the public and with government and industry. With a growing, increasingly urban and industrialized world population, recycling will become ever more important, although it is already feared that it will not be sufficient and that a shift to a closed-loop approach, in which every component of a manufactured product is either recyclable or biodegradable—or a comprehensive shift away from a consumer society—will be needed. GB

c. 1970

Breakdancing

Afrika Bambaataa

An energetic and acrobatic style of street dancing

Breakdancers performing in the streets of Brooklyn, New York, in 1984.

Also known as “breaking,” “b-boying,” and “b-girling,” breakdancing is a form of dance performed to hip-hop music. Breaking consists of several elements: toprock dance steps; downrock, in which a dancer uses their hands for support as well as their feet; power moves, involving acrobatic actions such as a head spin; “freezes,” stylish poses held on a strong beat in the music; and suicides, whereby a dancer marks the end of a routine, perhaps by dropping from a freeze to the floor.

Hip-hop culture originated in the Bronx in New York in c. 1970. It was spearheaded by the DJs Afrika Bambaataa (b. 1957), Kool Herc (b. 1955), and Kool Moe Dee (b. 1962), who pioneered breakbeat deejaying. Bambaataa encouraged rival gangs among the African American and Latino communities to challenge each other to dance using acrobatic movements, back flips, and spins, rather than compete against one another by violent means. Bambaataa persuaded young men to leave gang culture behind—instead they became hip-hop aficionados and performed breakdance moves to the music at clubs and in the street.

“When I first learned about the dance in ’77 it was called b-boying …”

Richard “Crazy Legs” Colón, The Freshest Kids (2002)

By the 1980s, hip-hop had entered the mainstream and with it breakdancing, partly thanks to the success of the pioneering breakdancing group Rock Steady Crew. Pop star Michael Jackson incorporated breakdancing into his dance routines and one hundred breakdancers performed at the closing ceremony of the Summer Olympic Games at Los Angeles in 1984. From there, breakdancing spread to Europe, Asia, and Latin America, and it is still a popular form of street dance today. CK

1971

The Original Position

John Rawls

A theory of justice that promotes fairness in lawmaking

The idea that laws result from a social contract between consenting citizens and their government dates back to the Leviathan of English philosopher Thomas Hobbes, published in 1651. It was U.S. philosopher John Rawls (1921–2002) who, in his book A Theory of Justice (1971), amended the thesis to include the idea of fairness.

“The original position is … a status quo in which any agreements reached are fair.”

John Rawls, A Theory of Justice (1971)

A basic social contract requires individuals to consent to be governed because the collective benefits gained by that arrangement exceed the individual freedoms lost. But is that contract fair to all individuals? Might not some people coerce others into agreeing because they are stronger? Rawls thought they could, and that laws might be drawn up that were irrational or partial in their effects. He therefore proposed that such a contract be drawn up in what he called the original position. In this hypothetical position, the negotiators of the contract operate behind “a veil of ignorance,” having no information about the people they are negotiating for. They do not know, for example, what age they are, or gender, or ethnicity, or, crucially, what “conception of the good” each individual has to lead a good life. Behind this veil, the negotiators would then act rationally and impartially to determine two main principles of justice on which the new society would be based. Each citizen would enjoy the same basic liberties while, using the so-called “maximin” rule—maximizing the minimum—social and economic inequalities would be addressed to the benefit of the least advantaged. With his theory, Rawls created a benchmark of fairness against which to test our laws. The implications of “justice as fairness” are considerable. SA
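
The maximin rule itself can be stated in a few lines of code. The Python sketch below, with pay-offs invented purely for illustration, selects from a set of candidate social arrangements the one in which the worst-off position fares best.

# Hypothetical pay-offs to three social positions under three candidate
# arrangements; the figures are invented solely to illustrate the rule.
arrangements = {
    "A": [10, 9, 2],   # highest total, but the worst-off position gets only 2
    "B": [7, 6, 5],    # lower total, but the worst-off position gets 5
    "C": [8, 5, 4],
}

# Maximin: choose the arrangement that maximizes the minimum pay-off.
chosen = max(arrangements, key=lambda name: min(arrangements[name]))
print(chosen)   # prints "B": inequalities are acceptable only when they work
                # to the benefit of the least advantaged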

1971

Liberation Theology

Gustavo Gutiérrez

A radical Christian response to poverty, oppression, and injustice

A mural of liberation theologist Oscar Romero in Suchitoto, El Salvador.

Faced with the social injustices they saw in their parishes, a number of Latin American Roman Catholic priests and theologians responded by developing a new interpretation of the teachings of Christ. Prominent among them was the Peruvian priest Gustavo Gutiérrez (b. 1928), whose book, A Theology of Liberation (1971), gave the movement its name.

Latin America in the 1950s and 1960s was a continent disfigured by military dictatorship, gross inequality, and extreme poverty. Formed in 1955, the Latin American Episcopal Conference of Bishops pushed the Roman Catholic Church to be more socially concerned, holding two major conferences to promote their ideas. Priests and theologians began to develop a new, liberation theology that was based on two principles. First, it recognized the need for liberation from all forms of oppression, be they political, economic, social, racial, or sexual. Second, this new theology should grow out of the communities it represented, not be imposed by the Church from above. Solidarity with the poor would transform society, so every aspect of the Church and its teachings would be viewed from the angle of the poor.

“[The] meaning of liberation is … a question about the very meaning of Christianity …”

Gustavo Gutiérrez, A Theology of Liberation (1971)

In its portrayal of Christ as a political figure, liberation theology came under attack from conservatives within the Catholic Church. In 1979 Pope John Paul II cautioned against seeing Christ as a revolutionary, while in 1983 the Congregation for the Doctrine of the Faith criticized the theology as Marxist. Its main leaders ignored such criticisms, although at some cost. In 1980, the El Salvador archbishop Oscar Romero was assassinated while saying mass in a hospice. SA

1971

Desert Versus Merit

John Rawls

The debate over what constitutes a fair basis for distributing rewards

The argument for distribution according to “desert” relies on thinking that distribution according to “merit” would involve injustice. It appeals to the notion that just distributions cannot be based on factors over which the recipients have no control. According to U.S. philosopher John Rawls (1921–2002), in his work A Theory of Justice (1971), the distribution of benefits according to race or gender, for example, seems unjust since neither factor is within a person’s control. But what if the alleged bases for desert are also largely outside of a person’s control? It would seem to follow that distributions based on desert are also unjust.

Claiming that individuals should be rewarded for factors under their control, but not for factors outside their control, Louis Pojman (1935–2005) argued that desert is a species of merit that entails responsibility. According to this argument, individuals are free to choose their effort and thus may be held responsible for this choice, and so, on these grounds, should be rewarded for the strength of that effort. The position assumes that effort is within individual control and talent is beyond it, and that “effort” is clearly distinguishable from other factors and is measurable.

“Justice is a constant and perpetual will to give every man his due.”

Louis Pojman, “Equality and Desert” (1997)

Joel Feinberg (1926–2004) called such meritorious qualities as intelligence, native athletic ability, and good upbringing “the bases of desert,” meaning that while we may not deserve these traits, they can form the basis for reward. That is, while you may not deserve your superior intelligence or tendency to work hard, you do deserve the high grade on your essay, which is a product of your intelligence and effort. JP

1972

Gaia Hypothesis

James Lovelock

A new view of Earth and how conditions for life are sustained

James Lovelock stands in front of a statue of the Greek goddess of the Earth, Gaia, after whom his well-known theory is named, in August 1980.

It is generally assumed that Earth is a solid object on which life flourishes. But what if Earth itself is alive, an organism in its own right? Such an insight occurred to James Lovelock (b. 1919), an independent British scientist, while looking for life somewhat farther afield.

“The Earth System behaves as a single, self-regulating system with physical, chemical, biological, and human components.”

The Amsterdam Declaration on Earth System Science (2001)

In September 1965 Lovelock was working at the Jet Propulsion Laboratory in California on methods for detecting life on Mars, when he suddenly realized that there was no need to visit the planet to find the answer. Since life on Earth is responsible for its atmosphere, investigating the Martian atmosphere—or lack of it—through an infrared telescope would determine whether there was indeed life on the Red Planet. From this understanding emerged the idea of Earth as a “living planet” and, in 1972, Lovelock’s hypothesis that Earth is a single and self-regulating organism capable of maintaining life. The hypothesis—named Gaia after the ancient Greek personification of Earth (suggested by Lovelock’s then neighbor, the novelist William Golding)—proposes that Earth is controlled by its community of interacting, living organisms in a self-regulating system. It looks after itself through the evolution of its biosphere, atmosphere, hydrosphere, and pedosphere. This system regulates salinity in the oceans and oxygen in the atmosphere. It achieves an environment that is optimal for life and is changed by that life. That does not mean that Gaia will necessarily look after and support human life. To the contrary, “Gaia will look after herself. And the best way for her to do that might well be to get rid of us,” Lovelock warned in 1987.

From this hypothesis derives an entire philosophy of Earth as a living planet that has profound implications for the environment and the future of life itself. The full implications of this revolutionary hypothesis have yet to be explored. SA

1972

Gene Therapy

Theodore Friedmann and Richard Roblin

The use of DNA as a pharmaceutical agent to treat disease

A computer artwork of a strand of DNA (deoxyribonucleic acid, red) being carried by a virus (blue), illustrating the concept of gene therapy using viruses.

Gene therapy is the treatment of diseases or disorders, especially those caused by genetic abnormalities, by the manipulation of genetic material. The term first came into wide use in the late 1960s and early 1970s, as various researchers began to investigate the possibility of altering DNA to treat disease. In 1972, Theodore Friedmann and Richard Roblin wrote an influential article titled “Gene Therapy for Human Genetic Disease?”, which laid out the various technical and ethical challenges to be overcome before gene therapy could be considered a workable option for treating human disease.

“[We] feel that the irreversible and heritable nature of gene therapy means that the tolerable margin of risk and uncertainty in such therapy is reduced.”

Theodore Friedmann and Richard Roblin, “Gene Therapy for Human Genetic Disease?” (1972)

The foundations for gene therapy were laid in the 1960s, as researchers began to make progress in understanding how genetic material, such as DNA and RNA, functioned at a molecular level. The idea that manipulating genetic material could cure disease was a natural one, and, by the time Friedmann and Roblin were writing in 1972, specific “vectors,” such as the use of viruses with altered DNA, for changing a patient’s DNA had already been proposed. Friedmann and Roblin argued that gene therapy would not be viable until researchers could describe with precision the effects of changing DNA in specific ways, and could develop vectors for doing this. The first experimental gene therapy treatment was approved by the U.S. Food and Drug Administration in 1990, for Ashanti DeSilva, a four-year-old suffering from an immune disorder.

Gene therapy has considerable promise for treating conditions such as cancer, cardiovascular disease, and autoimmune disorders. So far, however, it has been largely constrained to experimental trials, and has not yet led to the creation of widely available drugs or treatments. This may well change soon, as researchers continue to make progress toward developing safe, efficient, and reliable vectors for delivering gene therapy. BSh

1972

Ways of Seeing

John Berger

A study of the hidden ideologies that affect how we view art

The Grande Odalisque (1814) by Ingres was used by Berger to examine the portrayal of women in art.

When we look at an old master painting, what is it we actually see? Are we looking at and understanding some classical scene or female nude, or are we viewing the painting through modern eyes? If the latter, what difference does that make? In John Berger’s (b. 1926) view, it mattered greatly. In his four-part BBC television series Ways of Seeing, created with producer Mike Dibb (b. 1940) and first broadcast in 1972, and then in the succeeding book of the same name, he argued cogently that hidden ideologies affect how we view art.

John Berger is an English novelist and critic whose Marxist humanism informs his work: when he won the Booker Prize in 1972 for his novel G., he donated half his winnings to the Black Panther Party. The ideas behind Ways of Seeing came from Walter Benjamin’s (1892–1940) essay “The Work of Art in the Age of Mechanical Reproduction,” published in 1936. Benjamin described a theory of art that would be “useful for the formulation of revolutionary demands in the politics of art,” that art in the age of mechanical reproduction should be based on how politics was practiced. Berger agreed, arguing in the first program that constant reproduction of old masters has severed them from the context in which they were created. In discussing the female nude, he argued that only twenty or thirty such paintings depicted the woman herself, as opposed to a subject of male desire or idealization. He considered that the use of oil paint reflected the status of the patron who commissioned the painting and argued that color photography had taken over the role of painting.

Berger was deliberately polemical but his impact has been enormous. In criticizing traditional Western aesthetics he opened the door to feminist and other critics who read paintings with regard to the present day, rather than the purposes for which they were originally produced. SA

1972

Gross National Happiness

King Jigme Singye Wangchuck

A holistic approach to assessing a country’s social and economic progress

King Jigme Singye Wangchuck of Bhutan (center) and attendants, pictured in 1976.

Gross national happiness is a nonquantitative measure of the overall quality of life of a nation’s citizens that takes into account both economic and noneconomic criteria. All other things being equal, a nation with a high gross national happiness is more “developed” than a nation with a low gross national happiness. The concept was first articulated in 1972 by the king of Bhutan, Jigme Singye Wangchuck (b. 1955), and the government of Bhutan has used it ever since to assess its citizens’ wellbeing and to analyze the effects of government policies.

The concept of gross national happiness was developed in response to the perceived shortcomings of purely economic measures of development, such as gross domestic product (GDP). In addition to economic criteria, a country’s gross national happiness depends on factors such as the conservation of cultural heritage, equitable development, fair governance, and preservation of the natural environment. Proponents of gross national happiness often point out that once citizens’ basic needs are met, further increases in income make relatively small differences to how happy citizens are. Moreover, there are certain ways of increasing GDP (such as exploiting natural resources) that may actually leave citizens worse off, and thus decrease gross national happiness.

The basic intuition behind gross national happiness—that wellbeing involves more than economic success, and that GDP is (at least by itself) an inadequate measure of citizens’ quality of life—has seen increasing acceptance in recent years, especially among those interested in promoting sustainable development. More quantitative measures of wellbeing, such as the Index of Sustainable Economic Welfare (ISEW) and the Genuine Progress Indicator (GPI), can be seen as variations on this same basic idea. BSh

1972

Punctuated Equilibrium

Niles Eldredge and Stephen Jay Gould

The theory that species evolve through periods of rapid change in between long periods of limited or no change

The study of trilobite fossils was instrumental to the formulation of Eldredge and Gould’s theory of punctuated equilibrium as a mechanism of evolution.

Species do not evolve through regular changes that occur gradually over time, according to the theory of punctuated equilibrium, but rather they arise through periodic episodes of rapid change. Those rapid interludes of change can arise for a variety of different reasons, but reactions to sudden changes in the environment or drastically advantageous mutations that naturally grant a species a significant edge are common causes.

“Two outstanding facts of the fossil record—geologically ‘sudden’ origin of new species and failure to change thereafter (stasis)—reflect the predictions of evolutionary theory, not the imperfections of the fossil record.”

Stephen Jay Gould

Ever since Charles Darwin published On the Origin of Species in 1859, modern biology has largely been predicated on the theory that species gradually change over time. However, in 1972, U.S. paleontologists Niles Eldredge (b. 1943) and Stephen Jay Gould (1941–2002) published a paper titled “Punctuated equilibria: an alternative to phyletic gradualism,” in which they proposed that the gradual model was incorrect, or at least incomplete. The two argued that the fossil record did not support the gradual change view, but instead showed that species tend to remain in the same state, known as stasis, for long periods of time. Evolutionary change occurred only in, geologically speaking, brief flashes of time.

Punctuated equilibrium proposes that at any given point in a species’s history there will be little evidence for change. The evidence found in the fossil record has supported this claim, but it has also shown the opposite: that some species evolve gradually. The theory’s influence is still somewhat controversial, with some believing punctuation to be the dominant mode of evolutionary change, and others believing that it is the exception to the rule of gradual change. Since its introduction, however, punctuated equilibrium has influenced other fields, such as linguistics and geopolitical interactions, offering an alternative view to the idea of steady progress and adaptation. MT

1972

The Limits to Growth

D. H. Meadows, D. L. Meadows, and others

A prediction of the exhaustion of the Earth’s finite resources

It is now commonplace to discuss the world’s waning natural resources—the fact that the Earth is running out of oil, rare minerals, and even water, for example. A mere forty or so years ago, such an idea was unthinkable, as economic growth was assumed to be inexorable. This change in thinking is due largely to The Limits to Growth (1972), a piece of work commissioned by the Club of Rome global economic think tank. With its detailed computer modeling, mathematical formulas, and dry text, The Limits to Growth made the apocalyptic statement that unchecked economic and population growth was harming the planet.

The authors of the book devised a computer model known as World3, which simulated the consequences of interactions between the Earth’s resources and human exploitation. Five main variables were examined, covering world population, food production, industrialization, depletion of resources, and pollution. The model assumed that growth would be exponential but that the ability of technology to increase the availability of resources grew only linearly. Altering the growth trends in the five variables under three different scenarios produced an “overshoot and collapse” of the global economic system by the mid-to late twenty-first century in the first two scenarios, while the third resulted in a stabilized world.
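
World3 itself ran to hundreds of interlinked equations, but the qualitative “overshoot and collapse” dynamic can be suggested with a deliberately crude toy model in Python: demand grows exponentially while the technologically recoverable resource base grows only linearly, so the accumulated stock is eventually exhausted. The numbers below are arbitrary and bear no relation to the report’s calibrated data.

# A crude toy model (not World3): exponential demand versus linear growth
# in the recoverable resource base. All figures are arbitrary.
demand, recoverable, stock = 1.0, 5.0, 200.0

for year in range(2000, 2101):
    demand *= 1.05          # demand grows exponentially (5 percent a year)
    recoverable += 0.1      # technology adds recoverable supply only linearly
    stock += recoverable - demand
    if stock <= 0:
        print("Stock exhausted: overshoot and collapse around", year)
        break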

Needless to say, the report attracted great interest and much criticism. Many economists attacked its methodology and rhetoric, arguing that technology could solve all problems provided growth continued; if it stopped, billions would be consigned to perpetual poverty. Yet twenty- and thirty-year updates by the original authors have confirmed their predictions, and other analyses carried out in Australia and elsewhere have supported the original report. There are indeed limits to growth. SA

1973

Anthropic Principle

Brandon Carter

Observations of the universe must be compatible with the life-forms that observe it

The anthropic principle holds that the existence of intelligent observers, such as humans, provides evidence for physical facts about the universe, such as the precise values of the fundamental forces and the distribution of mass and energy. The concept was first posited by physicist Brandon Carter (b. 1942) in 1973.

Carter proposed the anthropic principle as a response to the “Copernican Principle,” which states that the Earth (and the humans that live on it) does not occupy a privileged place in the universe. While he agreed with this principle, he noted that the existence of observers is privileged in a different sense: trivially, observers should always expect to find that the universe they observe is compatible with their own existence. Carter called this the weak anthropic principle. He also formulated, but did not endorse, a strong anthropic principle according to which the universe itself must be structured so as to admit the existence of intelligent observers.

“I prefer to work with the simplest hypothesis compatible with the observational evidence.”

Brandon Carter

Scientists and philosophers have applied versions of the anthropic principle to a variety of problems. For example, the principle is often brought up in discussions of “fine tuning” arguments for God’s existence. Proponents of these arguments claim that the orderly nature of the universe, and the existence of intelligent life that this order allows, provides evidence of God’s existence. In response, some critics have claimed that the weak anthropic principle can explain the orderly nature of the world equally well—if the universe were not “orderly,” intelligent observers would simply not exist. BSh

1973

Small Is Beautiful

E. F. Schumacher

An argument advocating a change of scale in economic activity

Small Is Beautiful: A Study of Economics as if People Mattered (1973) is a collection of essays by British economist E. F. Schumacher (1911–77). In the book, Schumacher criticizes several tenets of conventional economic thinking, particularly the assumption that increased economic productivity will increase human welfare.

Schumacher wrote Small Is Beautiful in response to what he felt was a false but widespread view: namely, that the best way to address problems such as poverty, starvation, and war was to increase the amount of goods and services that were being produced. Schumacher emphasized the frequently ignored “costs” of increased economic productivity, such as the consumption of finite natural resources, the pollution of the environment, and the happiness of workers who are made redundant by the deployment of advanced technology. As an alternative to maximizing productivity, Schumacher argued that we ought to focus our efforts on lowering consumption and on developing so-called “intermediate technology,” which allows people (especially in developing countries) the opportunity to engage in meaningful work.

“ … if we squander the capital represented by living nature … we threaten life itself.”

E. F. Schumacher, Small Is Beautiful (1973)

Schumacher’s arguments in Small Is Beautiful have proved to be enormously influential, especially in relation to sustainable development, environmental ethics, and organizational theory. In particular, many scholars and policy makers have been influenced by his contention that things such as fossil fuels, the environment’s tolerance for pollution, and workers’ wellbeing are properly viewed as capital to be preserved, and should not be treated as income to be spent. BSh

1973

Strong Force

D. Gross, F. Wilczek, and D. Politzer

A radical insight into the force that binds the smallest particles of matter

An image of a three-jet event, which provides evidence of gluons, the carrier particles of strong force.

The four fundamental forces of physics are gravity, electromagnetism, and the weak and strong forces. The first two were discovered ahead of the others because they can be readily seen at work in the visible world; strong (like weak) force operates at a subatomic level, and was not known about until the twentieth century.

Scientists began hypothesizing the presence of a strong force in the mid-1930s, when they learned that atoms are made up of electrons orbiting a nucleus of protons and neutrons. Theorizing that electromagnetic repulsion between the protons should blow the nucleus apart, they posited the existence of another force many times stronger than electromagnetism to glue the nucleus together. In the 1960s, with the discovery of quarks (the subatomic particles that make up the protons and neutrons), the argument shifted to how quarks stick together. In 1973, David Gross (b. 1941), Frank Wilczek (b. 1951), and David Politzer (b. 1949) published two papers solving a key part of the mystery. They realized that, unlike other forces, strong force appears to become stronger with distance; the force between two quarks in close proximity is so weak that they behave as if they are free particles, but as they move apart the force between them increases as if they are connected by an elastic band. The team won the Nobel Prize for Physics for this discovery in 2004.
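
The behavior described above is often summarized with a simple phenomenological formula, the so-called Cornell potential used to model a heavy quark and antiquark; its two parameters, the strong coupling \alpha_s and the string tension \kappa, are fitted to experiment and are given here only symbolically:

V(r) \;\approx\; -\frac{4}{3}\,\frac{\alpha_s \hbar c}{r} \;+\; \kappa r

The Coulomb-like first term dominates at very short range, while the linear second term means that the attraction never dies away: at large separations the pull tends toward a constant, string-like tension, in contrast to the inverse-square falloff of electromagnetism.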

“ … without [strong force] every atom in the universe would spontaneously explode …”

David Evans, physicist

Strong force may only act within a minuscule space, but it is now understood to be the most powerful of all forces. In fact, at these distances, scientists calculate that it is one hundred times greater than the next strongest force, electromagnetism. JH

1973

Stockholm Syndrome

Nils Bejerot

The tendency of a captive to identify with, bond, or sympathize with their captor

Patty Hearst, who joined the left-wing guerrilla group that kidnapped her, in a photograph from 1974.

Stockholm syndrome (also known as terror bonding or traumatic bonding) is a complex psychological reaction experienced by persons in frightening situations. It has been identified in hostages, survivors of World War II concentration camps, members of religious cults, battered wives, and abused children. The term takes its name from a bank robbery that took place in Stockholm, Sweden, in August 1973, during which four bank employees were held hostage for 131 hours. After their release, the employees seemed to have formed a paradoxical emotional bond with their captors: the freed victims hugged and kissed them and declared their loyalty to the kidnappers. The syndrome was named by Nils Bejerot (1921–88), who served as a psychiatric consultant to the Swedish police during the hostage standoff.

“It’s hard to hate someone once you understand them.”

Lucy Christopher, Stolen: A Letter to My Captor (2009)

There are no widely accepted diagnostic criteria for the Stockholm syndrome. When newspaper heiress Patty Hearst was abducted by the radical political group, the Symbionese Liberation Army (SLA), in 1974, she became their accomplice, adopted an assumed name, and abetted the SLA in a bank robbery. However, an FBI report has termed such close victim-captor relationships “overemphasized, over-analyzed, over-psychologized, and over-publicized.”

Study of Stockholm syndrome has influenced how law enforcement agencies handle hostage situations; crisis negotiators may try to encourage captor-hostage bonding by telling perpetrators about the victims’ families or personal lives. Being viewed as a fellow human being, the theory holds, may be a victim’s best hope for staying alive. BC

1973

CCTV for Crime Prevention

United Kingdom and the United States

The use of cameras to produce images and recordings for crime surveillance purposes

Closed-circuit television (CCTV) works by sending a television signal to a limited number of predetermined monitors. A CCTV signal may be monitored by an onsite or remotely located security guard or police agency, or it may simply be recorded. CCTV is most commonly used to monitor high crime areas, such as parking lots, retail shops, airport terminals, and city centers.

The CCTV concept dates back to the 1930s, and by the 1950s it was being used to observe dangerous experiments, to deliver remote lectures, and to view live sporting events. However, it was not widely employed in crime prevention until the 1970s, when improved technology and reduced costs made it practical to deploy the technology on a large scale. In 1973, the British Home Office published an influential report that considered the possible benefits and drawbacks of CCTV as a deterrent against retail theft. In the years since this report, CCTV for crime prevention has been widely deployed in both the United Kingdom and other countries.

“You [are] watched by a ‘Teletector Synchroscan,’ a caged [CCTV] camera.”

Jack Rosenthal, Life magazine (July 11, 1969)

Advocates of CCTV argue that it is an effective method of situational crime prevention, insofar as it prevents crime by changing the physical environment. While studies have shown that CCTV is effective at reducing crime in certain contexts, some critics have alleged that it merely relocates crime, but does not prevent it. Others have argued that the widespread use of CCTV violates the privacy of those it records. The use of CCTV for crime prevention is likely to continue, especially if new technologies (such as facial recognition software) increase its effectiveness. BSh

1973

Neoconservatism

Michael Harrington

A U.S. political ideology that merges traditional conservatism with political individualism

Jeane Kirkpatrick, an advocate of neoconservative foreign policy, speaking on U.S. television in 2003. As U.N. ambassador she played a prominent role in the Reagan administration.

The term “neoconservative” was first used in 1973 by U.S. democratic socialist leader and political theorist Michael Harrington (1928–89) to describe a group of former left-wing liberals who had adopted a variety of politically conservative views. The term has also been used to describe the variety of conservative thought defended by these thinkers, especially as it relates to foreign policy.

“[A neoconservative is a] liberal who has been mugged by reality.”

Irving Kristol, journalist

Some of the thinkers identified by Harrington as neoconservatives included the editor and journalist Irving Kristol, the sociologists Nathan Glazer and Daniel Bell, and the Democratic politician Daniel Patrick Moynihan. While these thinkers shared many of the values of the “New Left” of the 1960s, they differed in two significant ways. First, they advocated an interventionist U.S. foreign policy aimed at containing communism and promoting democracy. Second, they argued that the policies favored by the New Left (such as affirmative action, welfare, and changes to education policy) were unlikely to achieve their stated goals, and would instead have significant negative side effects. These commitments led some, but not all, of the neoconservatives to leave the Democratic Party and join the Republican Party.

Today, the term “neoconservative” is generally used to describe conservatives, such as author Charles Krauthammer and former Deputy Secretary of Defense Paul Wolfowitz, who advocate interventionist foreign policies aimed at promoting democracy. Neoconservatism is closely associated with ideas such as democratic peace theory, which holds that democracies will rarely or never go to war with each other, and with the thought of political philosopher Leo Strauss, who argued that the success of Western political institutions (such as representative democracy) depended crucially on many other aspects of Western culture. BSh

1973

Global Positioning System

Bradford Parkinson

A global navigation system using satellites to determine location, speed, and time

An artist’s illustration of a Global Positioning System satellite in space. The user’s position is worked out by calculating the time it takes for radio signals to reach the receiver from the satellite.

The Global Positioning System (GPS) is a global navigation system that uses Earth-orbiting satellites to determine location, speed, and time. The system consists of three segments: the space segment is the group of satellites themselves; the control segment involves ground-based tracking stations; and the user segment encompasses all those who use the system and the devices that receive the satellite signals. The abbreviation GPS is commonly used to refer to these receivers themselves, as they have become a standard feature in cell phones and cars. While the development of GPS was a collaborative effort, it was conceived by aeronautical engineer and U.S. Air Force officer Bradford Parkinson (b. 1935) in 1973.
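
The geometric core of the system can be sketched briefly: each satellite signal yields a “pseudorange” (its travel time multiplied by the speed of light), and four or more pseudoranges suffice to solve for the receiver’s three coordinates plus its clock error. The Python sketch below uses invented satellite positions and a textbook Gauss–Newton iteration; it illustrates the principle only, not the real GPS signal-processing chain.

import numpy as np

C = 299_792_458.0   # speed of light, in meters per second

# Invented satellite positions (meters) and a made-up receiver state, used
# only to manufacture self-consistent pseudoranges for this sketch.
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([-40e3, 12e3, 6_370e3])   # somewhere near Earth's surface
true_bias = 1.0e-3                            # receiver clock error, in seconds
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Gauss-Newton iteration: solve rho_i = |x - s_i| + C*b for position and bias.
pos, bias = np.zeros(3), 0.0
for _ in range(10):
    ranges = np.linalg.norm(sats - pos, axis=1)
    residuals = ranges + C * bias - pseudoranges
    J = np.hstack([(pos - sats) / ranges[:, None],     # d(rho)/d(position)
                   np.full((len(sats), 1), C)])        # d(rho)/d(clock bias)
    delta = np.linalg.lstsq(J, -residuals, rcond=None)[0]
    pos, bias = pos + delta[:3], bias + delta[3]

print(pos, bias)   # converges to the invented receiver position and clock error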

“ … anyone, anywhere in the world, can almost instantaneously determine his or her location in three dimensions to about the width of a street.”

Bradford Parkinson

The concept of satellite navigation emerged from earlier ground-based systems that used radio signals between beacons to determine the position of ships and aircraft. Developed from the 1940s onward, many of these systems were used for military purposes, and the GPS was originally conceived as a missile guidance system. The first satellite was launched in 1978, and experimental military use of the system continued as more and more satellites were added. Put to major use during the Gulf War of 1990 to 1991, the system became fully operational in 1995 and was opened to civilian use the following year.

The GPS began with a constellation of twenty-four satellites, but by 2012 upgrades and expansions had taken that number to thirty-two. As these developments have improved the system’s accuracy, its military applications have also expanded to enhance the performance of precision-guided weapons and to locate and monitor troop movements. The system’s capacity to determine precise locations has been an invaluable aid for police and emergency services, and this extraordinary accuracy has made the GPS an indispensable asset that has many other applications. TJ

1974

What Is it Like to Be a Bat?

Thomas Nagel

Another being’s mental state can only be understood by experiencing it

Two fruit bats. According to Thomas Nagel, we will never be able to comprehend their experience of life.

According to philosopher Thomas Nagel (b. 1937), even if we can objectively describe all the physical qualities of existence, such as how a bat uses sonar, we will never be able to completely understand how it feels for the bat to be a bat because we are not bats. What is it like to be a bat? We can never truly know, because consciousness is more than just the physical process of sensation.

Nagel published his paper “What is it like to be a bat?” in 1974, largely in response to what he termed the “reductionist euphoria” that was dominant in the field at the time. Nagel offered his bat argument as a counter to the reductionist position that all mental phenomena, such as consciousness, could be explained in purely physical terms. Bats are so foreign to a human’s experience, Nagel argued, that it is impossible for us to understand their concept of the world. Even with our grasp of the physical reality, our understanding of their personal reality is necessarily compromised if we lack firsthand experience of what it is to live as a bat lives.

“ … if I try to imagine this, I am restricted to the resources of my own mind …”

Thomas Nagel, “What is it like to be a bat?” (1974)

Nagel’s question gets to the heart of the experiential, subjective nature of consciousness, even if it is one that we can never really answer. For someone who has been completely blind since birth, does the word “color” have the same meaning as it does to the person who sees? How could it? And how could a person with sight ever understand what it is like to have always been without it? For Nagel, it is impossible, or nearly so, and that is why consciousness amounts to more than an objective understanding. MT

1974

Lifeboat Earth

Garrett Hardin

Rich nations should not risk their own safety by giving their finite resources to poor ones

Assume, for a moment, that after a terrible nautical calamity you and a small group of survivors have found yourselves on a lifeboat in the middle of the ocean. Around you, in the water, are hundreds of other survivors begging to be let aboard. If you allow too many survivors onto your lifeboat, it will capsize and sink, dooming everyone. Do you allow any more people on? How many? And how do you choose who lives and who dies? Now imagine that the lifeboat is our planet, the people in the water are poor, developing nations, and the people in the lifeboat are rich nations. According to U.S. ecologist Garrett Hardin (1915–2003), this is why rich nations should not help poor nations.

The “lifeboat Earth” argument first appeared in Hardin’s essay titled “Lifeboat Ethics: The Case Against Helping the Poor” (1974). He wrote it, in part, to counter the popularized notion that the Earth was a spaceship whose inhabitants relied upon it entirely for their survival. Those inhabitants, according to the “spaceship Earth” argument, had an equal duty to share and protect the ship. Hardin disagreed. The Earth was not a spaceship, he said, but a group of nations. Some nations possessed many resources while others had few, and the wealthy have no duty to assist the poor.

“Ruin is the destination toward which all men rush …”

Garrett Hardin, “The Tragedy of the Commons” (1968)

Hardin’s essay puts everything from ethics to ecology, economics, and geopolitics under a stark canopy that has gained both critics and supporters. In its calculations, lifeboat Earth says that humans are not, by nature or effect, intrinsically equal. Some lives are worth more than others, and allowing everyone a chance at living merely guarantees a quicker end for all. MT

1974

The Standard Model of Particle Physics

Sheldon Glashow and Murray Gell-Mann

A comprehensive theory that attempts to explain and predict the interactions of fundamental subatomic particles

Sheldon Glashow and Mohammad Abdus Salam, pictured shortly before receiving the 1979 Nobel Prize for Physics. They were honored for their work on electroweak theory.

What is the universe made of? The discovery of a single, unified theory of everything that answers that question is the ultimate goal of theoretical physics. So far, however, it has proved impossible to find one that accounts for the behavior of gravity. Nonetheless, since the 1970s, a theory that explains everything else has evolved: the Standard Model of Particle Physics.

“ … new particles were being found that didn’t seem to fit into any well-recognized pattern, and one had no idea of what was going on until the concept of the standard model developed.”

Sheldon Glashow

The term “Standard Model” was first used at a conference in 1974 to summarize the combined research of many physicists. However, the model itself developed as a unification of the electroweak theory of Sheldon Glashow (b. 1932) and the strong nuclear force theory of quantum chromodynamics, which was pioneered by Murray Gell-Mann (b. 1929).

According to the Standard Model, all physical phenomena can ultimately be reduced to the interactions of two types of subatomic particles—quarks and leptons (both of which have six different varieties). These interactions are governed by three of the four fundamental forces of nature: the strong nuclear force, the weak nuclear force, and electromagnetism. The forces themselves are carried by their own intermediary particles known as bosons. This framework provides an explanation for the behavior of all matter and force in the universe (apart from gravity).

Much of the Standard Model’s acceptance is due to the success of its predictions about the intermediary or “force-carrier” particles. While the photon had long been known as the force carrier for electromagnetism, observations later confirmed the model’s predictions of the gluon as the particle at work in the strong nuclear force, and the exchange of W and Z bosons in the weak force. In 2012, experiments showed evidence for the existence of the Higgs boson. Although a transient particle, it is crucial to the Standard Model as it provides a means to explain how other particles acquire mass. TJ

1974

Genetic Engineering

Rudolf Jaenisch

The manipulation of an organism’s genome using biotechnology

Humans have been tampering with the gene pool of animals for thousands of years through the process of selective breeding. However, genetic engineering as we understand it today—the direct intervention and manipulation of DNA—only started in earnest in the 1970s.

Molecular cloning—the bringing together of genetic material from various sources—was first achieved by the U.S. biochemist Paul Berg in 1972, and the first genetically modified organisms were engineered in 1973. However, it was in 1974, when the pioneering biologist Rudolf Jaenisch (b. 1942) injected a DNA virus into a mouse embryo and created the world’s first transgenic mammal, that the “Brave New World” of genetic engineering can be said to have finally become a physical reality. Genetic engineering was applied to tobacco plants in 1986 to make them resistant to herbicides, and in 1994 the tomato became the first genetically modified crop approved for commercial sale. Transgenic, genetically modified crops, despite ethical and safety concerns, are now grown in more than thirty countries and are the subject of a vigorous debate between proponents of the new technology and those who designate it “Frankenstein food.”

The term “genetic engineering” was actually first used in the science fiction novel Dragon’s Island, by the U.S. author and “dean of sci-fi” Jack Williamson in 1951. This was just one year prior to the well-known Hershey–Chase experiments by geneticists Alfred Hershey and Martha Chase, which proved DNA to be the conveyor of hereditary genetic material, previously thought to have been carried by proteins. When molecular biologists James D. Watson and Francis Crick gave the world its first glimpse of our DNA’s beautiful double-helix structure in 1953, the path to genetic engineering was well and truly defined. BS

1974

Infotainment

U.S. broadcasters

News items that might be better classed as human interest stories

A news report gives information. A human interest story, such as a day in the life of a celebrity, provides entertainment. Traditionally, the two were separated in particular sections of a newspaper or on different radio and television programs. Today the distinction is blurred, giving rise to this portmanteau word for any program, website, or feature that contains something of both.

The word “infotainment” was coined in 1974 as the title for a conference of the Intercollegiate Broadcasting System. The popularity of the term attests to its usefulness as a description, but there is less certainty about the value of the phenomenon it describes. Critics think that infotainment contains too little of the word’s first two syllables. For example, in February of 2004 a CNN bulletin led with news of singer Janet Jackson exposing her breast at the Super Bowl; the second item was the interception of a letter containing ricin addressed to the U.S. Senate majority leader. Defenders of this running order claimed that the nudity was of greater popular interest than a failed assassination; detractors regarded it as “dumbing down,” emphasizing trivial issues and sidelining important ones.

“Television is increasingly … ‘infotainment,’ not a credible source of information.”

Richard Edelman, CEO of Edelman PR company

Although newsworthiness is subjective, there are certain criteria that have universal application: a hard news story should be serious and timely; with a soft news story there is no trigger event that demands it be reported immediately. Media analysts debate the merits of infotainment, but now that news is available online, on television, radio, and in print, broadcasters are under unprecedented pressure to catch audiences and hold onto them. GL

1974

The Lives of a Cell

Lewis Thomas

A pioneering popular science book suggests that the Earth is best understood as a cell

In The Lives of a Cell: Notes of a Biology Watcher (1974), a book written by U.S. immunologist, pathologist, and medical educator Lewis Thomas (1913–93), the author suggests that the Earth is perhaps best understood as a single cell. In a holistic conception of nature, he examines the relationship of the human race to nature and the universe, outlining the complex interdependence of all things. Thomas dismisses the idea of the self as a myth, suggesting that the human race is embedded in nature. He also suggests that society and language behave like organisms.

Furthermore, Thomas proposed that viruses, rather than being “single-minded agents of disease and death” are more like “mobile genes,” which have played a key role in the evolution of species because of their ability to move pieces of DNA from one individual, or species, to another.

“I have been trying to think of the Earth as a kind of organism, but it is no go …”

Lewis Thomas, The Lives of a Cell (1974)

The idea of the Earth as a single cell was formulated by Thomas in a collection of essays that were first published in the New England Journal of Medicine. Thomas’s essays provide a basic foundation of biology explained from a philosophical and also a scientific perspective. He wrote in a poetic style that is rich in simile and metaphor, and which makes concepts easy to understand. The volume became a best seller and won a National Book Award in 1975.

The Lives of a Cell spearheaded a genre of writing devoted to popular science. It became a standard text studied in U.S. schools, colleges, and universities, and is still used today to help students develop a deeper understanding of biology. CK

1975

Male Gaze

Laura Mulvey

A feminist theory analyzing how women can be objectified in visual culture

Director Alfred Hitchcock sneaks a peek on set during the filming of Vertigo in 1958.

The feminist theory of the “male gaze” is applied to films, photographs, paintings, and advertisements in order to analyze how women are represented. It was originally conceived solely in regard to Hollywood movies. Film theorist Laura Mulvey (b. 1941) coined the term in 1975 in her essay “Visual Pleasure and Narrative Cinema” in the journal Screen. Her essay remains one of the most widely cited articles in film theory.

Psychoanalytic theory is instrumental to Mulvey’s original concept of the male gaze. Referring to Sigmund Freud’s notion of the pleasure involved in looking, or scopophilia, she argued, “In a world ordered by sexual imbalance, pleasure in looking has been split between the active/male and the passive/female. The determining male gaze projects its fantasy onto the female form, which is styled accordingly.” In Hollywood, where movies are made by and for men, Mulvey identified two roles for women: erotic object for the characters within the film and erotic object for the audience. Alfred Hitchcock’s work was of particular interest because the audience and the film’s protagonist are often embroiled in the same scopophilic experience, as she wrote, “In Vertigo (1958) … in Marnie (1964), and Rear Window (1954), the ‘look’ is central to the plot oscillating between voyeurism and fetishistic fascination.”

“The gaze is male whenever it directs itself at, and takes pleasure in, women.”

Laura Mulvey

In the early 1970s, feminists called for a revolutionary look at cinema: to turn it from an instrument of male gaze to female gaze. Some felt that the only way to change the way women were represented was to put theory into practice. This led Mulvey and other feminists to make their own movies. JH

1975

Fractals

Benoit Mandelbrot

Fractals and the concept of self-similarity take us into the world of the very small

A computer-generated image of a Mandelbrot fractal, derived from the Mandelbrot set—a set of complex numbers plotted using their real and imaginary parts as coordinates.

Difficult to fully explain even for mathematicians, fractals are objects or quantities that are self-similar, with the whole possessing the same shape as one or more of its constituent parts. There are no previously hidden, alternate structures that only appear under extreme magnification; the pattern of the larger object continues to copy itself down to the smallest observable particle—and beyond.

“If we talk about impact inside mathematics, and applications in the sciences, Mandelbrot is one of the most important figures of the last fifty years.”

Heinz-Otto Peitgen, mathematician

Virtually impossible to visualize prior to the advent of the computer, the existence of fractals had been guessed at for centuries by scientists and academics who had no means at their disposal to verify their suspicions. In the seventeenth century the German mathematician Gottfried Leibniz was the first to consider the likelihood of self-similarity, and in 1872 another German mathematician, Karl Weierstrass, produced his well-known Weierstrass function, a fractal-like graph with detail at every level. It was not until 1975, however, that the Polish-born mathematician Benoit Mandelbrot (1924–2010), the father of fractal geometry, coined the word “fractal” and published his ideas, later translated as Fractals: Form, Chance, and Dimension (1977). The set that now bears his name—a collection of complex numbers with an intricately detailed, self-similar boundary—gave visible form to structures that his predecessors had dismissed as little more than geometric curiosities.
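How such an image is generated can be sketched in a few lines of Python. The bounds, step sizes, and iteration limit below are arbitrary choices made for brevity, not values taken from Mandelbrot’s own work: a point c of the complex plane is treated as a member of the set if repeatedly applying the rule z = z*z + c never drives z more than a distance of 2 from the origin.

# A minimal sketch: test membership in the Mandelbrot set by iteration.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # escaped, so c is not in the set
            return False
    return True             # stayed bounded, so treat c as a member

# Crude text rendering; zooming in on the boundary reveals self-similar detail.
for row in range(21):
    y = 1.2 - row * 0.12
    print("".join("#" if in_mandelbrot(complex(-2.0 + col * 0.05, y)) else "."
                  for col in range(64)))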

Fractals have since appeared in algorithmic art—visual art using designs created from algorithms and made possible with the use of fractal-generating computer software. Not unlike photography, fractal art began as a subclass in the visual arts, but received a boost in its legitimacy when a fractal landscape was chosen for the August 1985 cover of Scientific American, which depicted a view of the Mandelbrot set in the form of a heavily inclined, mountainous plateau. BS

1975

Near-death Experience

Raymond Moody

A study of the sensations reported by people who have been on the threshold of death

A drawing by English psychologist Dr. Susan Blackmore of a tunnel with light at the end, as seen by those who have undergone near-death experiences.

Near-death experiences (NDEs) encompass the broad range of sensations experienced by people who are either near death or believe themselves to be near death. While NDEs have been widely reported across many times and cultures, U.S. psychologist Raymond Moody’s (b. 1944) book Life After Life (1975) sparked much of the current academic and popular interest in the topic.

“If experiences of the type which I have discussed are real, they have very profound implications … we cannot fully understand this life until we catch a glimpse of what lies beyond it.”

Raymond Moody, Life After Life (1975)

In Life After Life, Moody examined more than one hundred case studies of people who had experienced “clinical death” and were subsequently revived. He argued that people’s reports of NDEs were surprisingly uniform, and that the structure of the experiences could be broken into a number of discrete “stages.” These stages included (1) feelings of serenity, (2) hearing noises, (3) becoming detached from one’s body, (4) traveling through a “tunnel” of light, and (5) experiencing a “review” of one’s life. Some people also reported meeting figures from various religious traditions, and many survivors of NDEs went on to experience significant changes in personality and outlook. Subsequent research into reports of NDEs has supported similar conclusions, though with some significant variations. For example, some people have reported NDEs that were dominated by feelings of fear and anxiety.

NDEs are sometimes taken to be evidence of an afterlife, and some survivors of NDEs have claimed to have seen or heard things that were in some way “miraculous.” However, researchers in a number of fields have also explored the physiological and neurological bases of NDEs, and have made significant progress. While current research has not identified a single cause of NDEs, factors such as anoxia (a lack of oxygen to the brain), altered temporal lobe functioning, and the release of endorphins or other neurotransmitters have all been speculated to play some role. BSh

1975

Handicap Principle

Amotz Zahavi

A theory to explain why the peacock’s tail is not contrary to Darwinian evolution

In 1975, in a paper titled “Mate Selection—A Selection for Handicap,” the Israeli biologist Amotz Zahavi (b. 1928) announced an ingenious idea that shed light on a problem that had perplexed the best minds in evolutionary biology for more than a hundred years. He called it the Handicap Principle.

On April 3, 1860, the year after publishing On the Origin of Species (1859), Charles Darwin wrote to his colleague Asa Gray, “The sight of a feather in the peacock’s tail, whenever I gaze at it, makes me sick.” He worried that the peacock’s tail did not fit with his theory of natural selection in which only the fittest creatures survive. How could a peacock’s tail confer fitness? If anything, it appeared to handicap its owner. Zahavi, however, argued that the peacock’s tail evolved precisely because it was a handicap, that a handicap might actually increase its owner’s fitness. Talking about the ornate display of a cock pheasant’s tail, he suggested that it displays to a female: “Look what a fine pheasant I must be, for I have survived in spite of this incapacitating burden behind me.” The male succeeds in persuading the female because its handicap acts as a true fitness test; only the fittest are strong enough to survive it.

“It is possible to consider the handicap as a kind of test imposed on the individual.”

Amotz Zahavi, “Mate Selection …” (1975)

The idea was originally disputed by many academics, including Richard Dawkins, who argued that “the logical conclusion to it should be the evolution of males with only one leg and one eye.” However, Dawkins and others were ultimately won over by a masterful piece of mathematical modeling by Alan Grafen in 1990, which revealed that natural selection could indeed favor males who evolve complex, costly displays or handicaps. JH

1975

No-miracles Argument

Hilary Putnam

A philosophical theory that justifies scientific leaps into the unknown

Scientists can predict outcomes—for example, nineteenth-century astronomers, applying Newton’s laws to irregularities in the orbit of Uranus, correctly predicted the existence of a then undiscovered planet. However, by speculating at all, they were departing from strict scientific precepts: their method could be validated only retrospectively, once Neptune had been sighted. The hypothesis was based, to some extent, on faith: the belief that it is possible accurately to extrapolate (infer the unknown from data we already have). If the belief that scientific theories are at least approximately true were to be mistaken, the only remaining explanation for their predictive success would be that some things are miraculous. Since miracles are by definition inexplicable, they are therefore, again by definition, unscientific: that is the nub of the no-miracles argument, which was first posited by Hilary Putnam (b. 1926) in his book Mathematics, Matter, and Method (1975).

Despite its slight suggestion of circularity—the supposition that miracles are unscientific does not necessarily mean that they do not occur; it may simply mean that the current definition and purview of science are flawed—the no-miracles argument provides a philosophical justification for continued scientific research into areas where the premises are based on likelihood rather than on incontrovertible fact. As a simple illustration, when a stick is held half in and half out of water, we may observe that it appears to be bent at the point where it breaks the surface: but do we know for certain that that is the case with all sticks in all water? We do not and cannot, but we can assume that it is. Without such assumptions, research—and hence scientific advances—would be rendered, if not impossible, even harder than it already is. The no-miracles argument is thus both pragmatic and realistic. GL

1975

Neuro-linguistic Programming

Richard Bandler and John Grinder

An approach to communication, personal development, and psychotherapy that argues that we can rid ourselves of inhibition through an effort of will

Paul McKenna at a neuro-linguistic programming training seminar in 2006. The method is said to make business people more successful, fat people thinner, and to rid others of phobias.

Neuro-linguistic programming (NLP) was developed in the 1970s by two Americans, Richard Bandler (b. 1950) and John Grinder (b. 1940), whose collaboration began with their book The Structure of Magic Vol. I (1975). Based in part on some of the theories of Noam Chomsky—particularly the idea that every sentence has two structures: one superficial, the other underlying—NLP claimed to be an innovative method of treating phobias, depression, habit disorders (especially those on the obsessive-compulsive spectrum), psychosomatic illnesses, and learning difficulties.

“As long as you believe it is impossible, you will actually never find out if it is possible or not.”

John Seymour, Introducing Neuro-Linguistic Programming: The New Psychology of Personal Excellence (1990)

The central tenet of NLP was that there is nothing anyone else can do that we cannot do ourselves. Thus inhibited, sick, and disabled people were encouraged by therapists to model their thoughts and words—and subsequently their actions—on those of people who were not so afflicted: failure would be turned into success simply through imitation of the successful. One of NLP’s main marketing slogans was “finding ways to help people have better, fuller, and richer lives”.

The theory of NLP was massively popular and soon came to be practiced by private therapists and hypnotherapists, as well as by management consultants worldwide. Bandler and Grinder quit their teaching jobs to devote their energies to NLP courses for students (who were charged $1,000 a head for a ten-day workshop) and then, even more lucratively, to bestselling books on the subject, such as Frogs into Princes (1979).

Meanwhile, skeptics cast doubt on the scientific basis of the concept. Gradually NLP came to be regarded as little more than a variant form of assertiveness training. However, although the credibility of certain aspects of NLP has been undermined, the practice has been highly influential and remains a major component of some business training courses. GL

1976

The Selfish Gene

Richard Dawkins

The theory that genes are what drive the evolution of species

All organisms have, at the cellular level, strands of genetic material known as genes. These genes contain the basic information that life needs to grow, metabolize, and reproduce. In The Selfish Gene, published in 1976 while he was a lecturer at Oxford University, British biologist Richard Dawkins (b. 1941) proposed that it was these genes that actually propelled evolution. The organism in which the genes are found will strive not only to reproduce its own genetic material, but also to protect genetic material similar to itself.

Dawkins himself said that the ideas he expressed about evolution were not his alone, and that they had existed for quite some time, viewed by experts in the field as fairly standard. What was different about his idea was the way that he explained it, framing evolution as a mechanism through which self-replicating material made copies of itself. As part of his explanation he also drew an analogy with cultural phenomena, which he dubbed memes, that replicated themselves from person to person, growing or shrinking in popularity.

“We are all survival machines for the same kind of replicator—molecules called DNA.”

Richard Dawkins, The Selfish Gene (1976)

The Selfish Gene caused some controversy after its publication. Though some readers misread the title as implying that genes feel or express desires, the notion that life evolves because of the unconscious, blind drive of genes toward self-replication cast the very notion of evolution in a different light. The idea offered a new perspective on certain behaviors, such as altruism and attempts at reproduction that result in death soon after copulation. The view that genes are the fundamental unit of natural selection has since become widely accepted. MT

1976

Meme

Richard Dawkins

An element that guides cultural evolution, as a gene guides genetic evolution

A hypothetical “bait” meme lures other memes before propagating itself into people’s minds.

Memes are ideas, habits, tunes, skills, stories, fashions, catchphrases: any kind of behavior that can be copied from person to person through imitation. When these words or actions are heard or seen, they are selectively retained and then copied, sometimes with variation, and so they spread between people, within a culture.

British biologist Richard Dawkins (b. 1941) coined the term in his well-known book The Selfish Gene (1976); its last chapter was titled “Memes: the new replicators.” Dawkins felt that “Darwinism is too big a theory to be confined to a narrow context of the gene,” and so he introduced the idea of cultural evolution. To describe the self-replicating unit of cultural transmission, he took the Greek word mimeme, meaning “that which is imitated,” and shortened it to rhyme with the word “gene.”

“Cultural transmission is analogous to genetic transmission …”

Richard Dawkins, The Selfish Gene (1976)

The word and idea were taken up enthusiastically, spawning a new discipline of memetics; indeed, they demonstrated how a successful evolutionary meme might actually spread. Dawkins was interested in the possibility of the meme being developed into a proper hypothesis of the human mind, which happened most notably in the hands of the intellectual heavyweights Daniel Dennett in Consciousness Explained (1991) and Susan Blackmore in The Meme Machine (1999). Dennett argued that “human consciousness is itself a huge complex of memes.” Critics rebelled against the idea of selfish memes acting like mind-viruses, parasitizing their unwitting human hosts. But, as Dawkins said from the outset, our conscious foresight means “we have the power to defy the selfish genes of our birth and, if necessary, the selfish memes of our indoctrination.” JH

1976

Missing Letter Effect

Alice F. Healy

Research that shed new light on understanding the way that we read

In 1976 cognitive psychologist Alice F. Healy conducted experiments to determine the accuracy with which university students could identify every occurrence of certain letters in given passages of prose. Participants were asked to mark each target letter as they read silently at their normal speed. For example, in the sentence “Men who work very long hours pass too little time at home,” they were asked to mark every letter “e.” If they reached the end and found that they had missed one or more occurrences, they should not go back.

The results showed that recognition was much higher when the letters were sounded (for example, the “e” in “very”) than when they were silent (the “h” in “hours”). Participants were more likely to notice the letters in words that gave the sentences their meaning than in form words that were merely functional—more of them would mark the “t” in “time” than the “t” in “at.”

“ … frequent words … show more detection errors than infrequent words …”

A. F. Healy, “Detection Errors on the Word The” (1976)

Healy’s work laid the foundation for further studies. One later experiment asked participants to look for certain letters after they had first read the passage with no idea of what they would then be required to do: in such cases, the hit rate increased significantly.

Among the findings of this research was that fluent readers do not generally read every letter of every word, but rather assume what the word is from its general shape and the context. However, “fluent” is not necessarily accurate: witness, for example, the common confusion of “causal” and “casual.” Study of the missing-letter effect has cast light on the workings of the human mind and enabled educationists better to help beginners and slow readers. GL

1977

RSA Algorithm

R. L. Rivest, A. Shamir, and L. M. Adleman

A revolutionary method for coding data and maintaining privacy on the Internet

In 1977, a paper came out of the Laboratory for Computer Science at the Massachusetts Institute of Technology titled “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems.” It had the prescience to forecast that “the era of ‘electronic mail’ may soon be upon us,” adding, “we must ensure that two important properties of the current ‘paper mail’ system are preserved: (a) messages are private, and (b) messages can be signed.” The proposed solution of the authors—Ronald Rivest (b. 1947), Adi Shamir (b. 1952), and Leonard Adleman (b. 1945)—was a type of cryptography that became known by the initials of their surnames: the RSA algorithm.

Two types of cryptosystem currently exist: secret-key (or symmetric) and public-key (or asymmetric). Secret-key cryptography relies on a single key to code and decode the message. Public-key cryptography relies on two: a public (published) key that performs the encryption, and a private key (kept secret) used for decryption. Crucially, the key used to encrypt a message is not the key needed to decrypt it, so the secret key never has to be transmitted and risk interception. U.S. cryptographers Whitfield Diffie and Martin Hellman were the first to describe the concept of public-key cryptography in 1976 (although it now seems to have been invented several years earlier at Britain’s Government Communications Headquarters, but kept secret). However, Rivest, Shamir, and Adleman were the first to make it workable, using prime numbers about one hundred digits long.
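The arithmetic behind the scheme can be shown with a deliberately tiny worked example in Python. The primes below are textbook-sized and chosen purely for illustration (far too small to be secure; as noted above, practical RSA keys are built from primes of roughly a hundred digits or more), and the modular-inverse call requires Python 3.8 or later.

# A toy RSA walk-through with tiny primes (insecure; for illustration only).
p, q = 61, 53                      # two secret primes
n = p * q                          # public modulus: 3233
phi = (p - 1) * (q - 1)            # Euler's totient of n: 3120
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: modular inverse of e

message = 65                       # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private key (d, n) can decrypt

print(ciphertext, recovered)       # 2790 65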

Today the RSA system is the most popular form of public-key cryptography: it is found in most computer operating systems, is built into the main protocols for secure Internet communication, and is used by numerous institutions worldwide. In 2002, its inventors were given the Turing Award, perhaps the highest honor in computer science, for their ingenious idea. GL

1978

Critique of Orientalism

Edward W. Said

A critical reassessment of Western scholarship of the “Orient,” specifically in relation to the Arab Islamic world

Orientalism (1978) is Edward W. Said’s best-known work, and has been referred to as one of the most influential scholarly books of the twentieth century.

Palestinian-American historian Edward W. Said (1935–2003) did not coin the word “Orientalism,” but his choice of it as the title of his influential study in 1978 of relations between the Arab world and the West contributed greatly to subsequent understanding of the term.

“From the beginning of Western speculation about the Orient, the one thing the Orient could not do was to represent itself.”

Edward W. Said, Orientalism (1978)

According to Said, the West had consistently and chronically demeaned the Arab world in order to subjugate it. This view was received sympathetically by many who were anxious to re-evaluate history in the light of post-World War II political developments in the Middle East, notably the creation in 1948 of the state of Israel and the emergence of several Arab states as world powers because of their oil wealth.

Not everyone admired the work, however, and Said was criticized for a number of perceived flaws in his argument. One of the main objections was to his binary depiction of West and East as antagonistic monoliths, the former the oppressor, the latter the victim, whereas in truth there had been constant interplay between the two and exchanges of ideas and materials. Why, asked historian Bernard Lewis, if the British were so determined to suppress Egyptian national identity, as Said claimed, did they decipher the ancient language of the region? How, others asked, could a book titled Orientalism ignore, as it did, China and Japan? Some people thought that Said had ignored a complex symbiosis because it did not fit in with his preformed theories.

Orientalism may have been flawed (though Said responded trenchantly to these and many other criticisms) but it deeply influenced the development and content of postcolonial studies courses, not only as they relate to the Arabs but also to the whole world. Its impact has been felt around the globe, with editions in thirty-six different languages. GL

1979

Rap Music

United States

A style in which rhythmic and/or rhyming speech is chanted to musical backing

U.S. rap star Grandmaster Flash, pictured here deejaying in 1980.

Rap music is music with an insistent beat and accompanying lyrics, which may incorporate digital sampling (extracts from other recordings); it is alternatively known as hip-hop. The lyrics are spoken or chanted rather than sung. They may either rhyme in the conventional manner of verse or derive their rhythm from the beat. Many raps feature both types of lyric. The subject matter can range from traditional poetic themes, such as love and nature, to controversial “gangsta raps” that have been condemned for their misogyny and for glamorizing drugs and violence.

The use of repetitive drum beats to punctuate and dramatize spoken narratives is thought to have derived from West Africa, where it has been a common practice since precolonial times. The recitative—a style of singing that resembles (or is) speech—has a long tradition in the West, being used in opera and in musicals.

“ … though the element of poetry is very strong, so is the element of the drum …”

Archie Shepp, jazz musician

The use in English of the word “rap” as a verb or noun meaning “talk” dates from the sixteenth century, but its application to a form of music began among African Americans in the 1960s. One of the earliest exponents was Gil Scott-Heron (1949–2011), a jazz musician who became known as “the godfather of rap.” The style came to prominence in 1979 when “Rapper’s Delight” by the Sugarhill Gang, a New Jersey trio, became the first rap record to make the Top 40 singles chart. There followed a long line of rap stars, including Grandmaster Flash and the Furious Five, LL Cool J, Public Enemy, and the Beastie Boys. Later came a second wave ridden by, among others, P-Diddy, Snoop Dogg, Jay-Z, Eminem, and Kanye West. GL

1979

Social Networking Service

United States

A platform to build social networks or social relations among people

The earliest services that enabled people to connect with each other via computer were e-mail and chat programs, which appeared in the early 1970s. The first social networking service in the modern sense of the term, however, was USENET, which began in 1979 as a messaging system between Duke University and the University of North Carolina. It spread rapidly into other U.S. universities and government agencies.

As the World Wide Web expanded in the 1990s, commercial companies began to set up online social networks that could be used by members of the public. The first of these, such as Classmates.com and SixDegrees.com, were generally intended as a means of reconnecting with old school, college, and work friends. Over time the purpose of social networking services was expanded to include dating services (such as Friendster), maintaining business contacts (LinkedIn), and enabling rock bands to connect with their fans (MySpace). Facebook, today the world’s largest social network, was launched in 2004; it took the Classmates.com model and added improved profile pages, links, and other attractive features. It was joined online in 2006 by Twitter.

“Social media is not about the exploitation of technology but service to community.”

Simon Mainwaring, social media specialist

The benefits of social networking are numerous—it has made it easier for friends and family to stay in touch, enabled people to connect with others with similar interests, and has even proved a powerful tool for political activism, such as in the Arab Spring of 2011. However, many critics view social networking sites as simply an online form of popularity contest and important questions remain about users’ privacy. GL

1979

Sudoku

Howard Garns

A simple number game that became a worldwide phenomenon

The name Sudoku is an abbreviation of the Japanese phrase “Only single numbers allowed,” but the game itself, contrary to popular misconception, was not invented in Japan or by anyone of Japanese descent. Rather, it was created by a retired U.S. architect, Howard Garns (1905–89), in 1979.

Garns based his game on the “magic square” grid concepts of the eighteenth-century Swiss mathematician Leonhard Euler, who himself had adapted them from early tenth-century Islamic texts. The original magic square, attributed to the Persian-born chemist, astronomer, and physicist Jabir ibn Hayyan, was a nine-celled square featuring the numbers one through nine, with five in the middle and each row, column, and diagonal adding up to fifteen. Known as the “buduh square,” it became so popular that it doubled as a talisman. Magic squares then began to grow in size: 4x4, 6x6, and even 10x10 grids were appearing by the thirteenth century.
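The nine-celled arrangement described above is easy to verify in a few lines of Python; the particular layout used here is one classic solution, chosen purely for illustration.

# Check a classic 3x3 magic square: digits 1-9, five in the center,
# every row, column, and diagonal summing to fifteen.
square = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

rows = [sum(r) for r in square]
cols = [sum(square[r][c] for r in range(3)) for c in range(3)]
diagonals = [sum(square[i][i] for i in range(3)),
             sum(square[i][2 - i] for i in range(3))]

print(rows, cols, diagonals)  # every total is 15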

“Scientists have identified Sudoku as a classic meme …”

David Smith, The Observer (May 15, 2005)

Grid-style number puzzles appeared for a time in late nineteenth-century French newspapers, but it was Garns’s Sudoku puzzle that was the first attempt at the modern game so familiar to us today. It was originally referred to as Number Place and was published in Dell Pencil Puzzles and Word Games in May 1979. In Japan it was first published in the puzzle book Monthly Nikolist by the Japanese company Nikoli in 1984, as Suuji Wa Dokushin Ni Kagiru, “the numbers must occur only once.” Two years later its grid was made symmetrical and fewer clues given, and when published in The Times of London in 2004, it became a phenomenon. BS

1979

Postmodernism

Jean-François Lyotard

A philosophy that challenged and deconstructed traditional beliefs

“Postmodernism” first emerged in the philosophical lexicon in 1979 in The Postmodern Condition, a work by the French philosopher and theorist Jean-François Lyotard (1924–98), and in no time cast a very wide and all-encompassing net: architecture, literature, the visual arts, religion, economics, science, and ethics. There was hardly a discipline or an idea that was free of its influence, and its definitions are as broad as the areas of life it touches: it is the “drawing of fresh attention to established conventions,” the “reintroduction of classical elements to past styles,” and “any reaction to modernism that recalls traditional materials and forms, ideas and inquiries.” The consensus is that it likely grew from a suspicion that science and understanding cannot answer all of life’s questions, that reality is more than the sum of our understanding. Postmodernism is skeptical of generalized conclusions that apply to us all—it is the individual experience and how we interpret it for ourselves that matters, not the collective universal law seen by others and applied to the collective.

“Simplifying … I define postmodern as incredulity toward metanarratives.”

J.-F. Lyotard, The Postmodern Condition (1979)

The concept, however, is not without its detractors. Some have dismissed it as nothing more than a meaningless buzzword, used to describe everything from television commercials to the dread we feel at the prospect of nuclear war. Others argue that, because of its inherent skepticism, it denies too much and lacks the optimism that can come from scientific, philosophical, and religious truths.

But postmodernism may not be the successor to modernism it claims to be. Modernists queried religion, science, and concepts such as nationhood, too. So is postmodernism a new dynasty, or merely an heir? BS

1979

Principles of Biomedical Ethics

Tom L. Beauchamp and James F. Childress

The development of a standard approach to biomedical ethics, which resolved ethical issues in terms of four ethical principles

There are numerous ethical issues surrounding the medical use of human embryonic stem cells, which have the potential to develop into a variety of different cell types in the body.

Principles of Biomedical Ethics (PBE, 1979) is a book by philosophers Tom L. Beauchamp and James F. Childress (b. 1940). It was one of the first books to present a detailed, systematic treatment of ethical decision making in health care and biomedical research, and it laid the groundwork for contemporary research and teaching in this area.

“We understand ‘biomedical ethics’ … as the application of general ethical theories, principles, and rules to problems of therapeutic practice, health care delivery, and medical and biological research.”

Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics (1979)

Beauchamp and Childress began PBE by contrasting the utilitarian theory that an action’s moral rightness is determined by its consequences with the deontological theory that there are “right-making” features of actions that have nothing to do with consequences. While the authors disagreed as to which theory is correct, they argued that both theories support a set of four “core” principles relevant to ethical decision making in medicine. These principles included duties to promote patients’ wellbeing (beneficence), to avoid causing them harm (nonmaleficence), to respect their decisions (autonomy), and to distribute health care goods and services in an equitable manner (justice). In the remainder of PBE, they applied these four principles to a variety of ethical problems that arise in medicine, such as the right of patients to refuse treatment, the problem of determining what counts as informed consent, and the appropriate way of conceptualizing the professional-patient relationship.

Beauchamp and Childress have issued updated editions of PBE every few years, and it has been widely adopted in introductory courses on biomedical ethics at both the undergraduate and graduate level. More broadly, the book has played an important role in raising the academic profile of so-called “applied ethics,” in which ethical theories are applied to particular issues that arise in discipline-specific contexts (“business ethics,” “research ethics,” “environmental ethics,” and so on). BSh

1979

The Expanding Circle

Peter Singer

The notion that morality should be based in utilitarianism

The expanding circle is a philosophical idea that the greatest good for the greatest number is the only measure of good or ethical behavior. It advocates that the “circle” of beings with rights and privileges should be expanded from humans to include many animal species that have “interests.”

The Australian moral philosopher Peter Singer (b. 1946) used the phrase the “expanding circle of moral worth” in his book Practical Ethics (1979), which analyzes how ethics can be applied to difficult social questions, such as the use of animals for food and research, and the obligation of the wealthy to help the poor. He attempted to illustrate how contemporary controversies have philosophical roots and presented his own ethical theory to apply to practical cases. Many of Singer’s ideas, such as his support for abortion, were controversial. In an attack on speciesism, he argued that differences between the species were morally irrelevant and that speciesism is akin to racism. Singer advocated redistributing wealth to alleviate poverty. He suggested that infanticide up to a month after birth could be humane in cases where a child has an extreme disability. Singer argued that nonvoluntary euthanasia is neither good nor bad when the individual killed is in a coma.

Practical Ethics caused outrage in some quarters, leading Singer to make additions to later editions, outlining what he felt were misunderstandings that caused an adverse reaction to his ideas. He expanded on his theory further in his book The Expanding Circle: Ethics and Sociobiology (1981). Singer’s views on the treatment of animals have been espoused by animal rights activists, but his idea of the expanding circle remains contentious among philosophers, advocates for the disabled, and prolife supporters, and some assert that his stance on various issues can lead to eugenics. CK

1980

Originalism

Paul Brest

A new reading of the U.S. Constitution as its authors (may have) intended

Originalism is any attempt to interpret and apply the U.S. Constitution as it would have been construed by contemporary eighteenth-century readers at the time of its inception in 1787. The word was coined in 1980 by U.S. academic Paul Brest (b. 1940) in an essay titled “The Misconceived Quest for Original Understanding.”

Originalists may be divided into two main groups: those who want to keep faith with the intentions of those who drafted and ratified the Constitution; and those who subscribe to the original meaning theory, which requires that the Constitution be interpreted in the same way as reasonable persons living at the time of its creation would have interpreted it. At the root of all originalism is formalism—the belief that the judiciary is empowered only to uphold existing laws: creating, amending, and repealing them are the responsibilities of the legislature.

“What in the world is a moderate interpretation of a constitutional text?”

Antonin Scalia, Supreme Court Justice

Originalists are generally regarded as politically conservative. One of the most prominent is Supreme Court Justice Antonin Scalia (b. 1936), who claimed that the Eighth Amendment (prohibiting cruel and unusual punishments) must be read, understood, and interpreted today according to what would have been regarded as cruel and unusual punishment in the 1790s. However, the theory has also been espoused by liberals such as Supreme Court Justice Hugo Black (1886–1971) and Yale lawyer Akhil Amar (b. 1958). Both groups represent a challenge to “Living Constitutionalists”—those who believe that the U.S. Constitution is open to interpretation in every succeeding age, and that it is flexible and dynamic, not rigid and moribund. GL

1980

Naming and Necessity

Saul Kripke

A seminal work in the philosophy of language

Naming and Necessity (1980) is a book by U.S. philosopher and logician Saul Kripke (b. 1940), based on a series of lectures he gave at Princeton University in 1970. It is often considered one of the most important philosophical works of the twentieth century.

In Naming and Necessity, Kripke offers sustained arguments against several views about language and modality that were widely accepted at the time. The first view was descriptivism, which held that proper names referred to their targets in virtue of the descriptions speakers associate with them. The second view was the thesis of contingent identity, which held that the sorts of identities discovered by empirical investigation (such as “Mark Twain is Samuel Clemens”) are not necessary, and are thus false in some “possible world.” In contrast to these views, Kripke argued that proper names were rigid designators that picked out the same target in every possible world and that identity claims involving these names were thus necessary. Kripke then used his account to argue against materialism, which held that mental events, such as a person’s being in pain, were identical to physical events, such as the firing of neurons in that person’s brain.

“I will argue … that proper names are rigid designators …”

Saul Kripke, Naming and Necessity (1980)

Kripke’s thesis led some philosophers to abandon descriptivism and instead adopt a causal theory of reference, by which the meaning of a term is fixed by its causal history. His suggestion that necessary truths could be discovered by empirical investigation also renewed interest in philosophical metaphysics, as some felt that his arguments showed the importance of examining the natures of objects in the physical world. BSh

1980

The Chinese Room Argument

John Searle

An argument against the possibility of true artificial intelligence

Although a computer can play chess, that does not mean it has a mind that understands the game.

In the mid-twentieth century, a theory of consciousness known as functionalism developed. It explained the human mind in computational terms, rejecting the notion that there is something unique to human consciousness or the human brain. If a computer can pass a Turing Test by convincing an expert that it has a mind, then it actually does have a mind. In 1980, philosopher John Searle (b. 1932) responded to functionalism with his revolutionary article, “Minds, Brains, and Programs,” in which he critiqued the possibility of artificial intelligence with his Chinese room argument.

Imagine that a man who does not speak Chinese is put in a windowless room with nothing but a Chinese dictionary that contains the appropriate responses for certain queries. A slip of paper is passed under the door with a Chinese phrase on it. The man looks up the appropriate response, writes it on the paper, and passes it under the door. Since he speaks no Chinese, he has no idea what he just said; he is only following the stimulus-response pattern indicated by the dictionary. To an observer outside the room, he seems to be a Chinese speaker, even though the phrases he reads and writes mean nothing to him. Similarly, a computer might pass a Turing Test, but it does so only by blindly following its programming. Though it has syntax (logical structure), it has no semantics (understanding of meaning). Thus, though computers may emulate human minds, they can never actually have minds.
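The procedure followed by the man in the room amounts to a lookup table, which a few lines of Python can mimic. The phrases below are invented placeholders written in pinyin rather than Chinese characters; the point is only that the program, like the man, shuffles symbols without understanding any of them.

# A minimal sketch of rule-following without understanding.
RULE_BOOK = {
    "ni hao ma?": "wo hen hao, xiexie.",
    "jintian tianqi zenmeyang?": "tianqi hen hao.",
}

def chinese_room(message: str) -> str:
    # Pure symbol manipulation: look up the input, return the listed response.
    return RULE_BOOK.get(message, "dui bu qi, wo bu dong.")

print(chinese_room("ni hao ma?"))  # a fluent-looking reply, with zero understanding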

“ … the programmed computer understands … exactly nothing.”

John Searle, “Minds, Brains, and Programs” (1980)

Searle’s article ignited a firestorm of debate in the fields of philosophy and artificial intelligence. His views are vital to dualistic explanations of consciousness. JM

1980

Neocatastrophism

Luis Alvarez and Walter Alvarez

A theory arguing that asteroid impacts cause mass extinctions

An artwork of an asteroid striking the Earth, similar to the one responsible for wiping out the dinosaurs.

Neocatastrophism is the new face of the discredited idea of catastrophism. It claims that our planet and its inhabitants have been shaped by cataclysmic events. The theory regained favor on June 6, 1980, with a publication in the journal Science. The paper, by the Nobel Prize winner Luis Alvarez (1911–88) and his son Walter (b. 1940), blamed the death of the dinosaurs, indeed the whole of the Cretaceous-Tertiary mass extinction, on an asteroid crashing into Earth.

The prevailing wisdom at the time was gradualism: a theory originated by the Victorian geologist Charles Lyell (1797–1875), which posited steady, creeping change. Charles Darwin (1809–82) seconded Lyell, and catastrophists were derided as standing against evolution. The Alvarezes’ paper, however, was convincing. They had discovered that the Cretaceous-Tertiary boundary contained a clay layer with unusually high levels of iridium, an element rare in the Earth’s crust. The isotopic ratio of the iridium matched that found in asteroids. Moreover, the presence of shocked quartz granules, tektites, and glass spherules suggested an impact. The Alvarezes calculated that, to account for the data, the asteroid would have been about 6 miles (10 km) in diameter: large enough to throw enough dust into the sky to trigger a nuclear winter.

“ … the Cretaceous world is gone forever, and its ending was sudden and horrible.”

Walter Alvarez, T. rex and the Crater of Doom (1997)

Ten years after the paper’s publication, a suitable impact crater was discovered near Chicxulub, Mexico. The Alvarezes and their team precipitated a paradigm shift. Evolution is now thought to be able to occur in leaps and bounds, and neocatastrophism and asteroids are regarded as plausible agents for change. JH

1981

Biblical Narrative Criticism

Robert Alter

A modern form of biblical criticism based in contemporary literary theory and practice

Biblical narrative criticism examines the scriptures from a literary perspective. It seeks to discover the role that literary art plays in shaping biblical narrative by painstaking analysis of the text, taking into account devices such as parallelism, details recorded, tempo, word order, sound, rhythm, syntax, repetition, characterization, and dialogue. Narrative criticism examines the Bible as a cohesive work of literature, rather than a collection of disparate documents, in order to find the interconnections in the text that highlight motifs and themes. It explores questions regarding authorial intent, the reliability of the narrator, and the ethical implications of multiple interpretations.

Narrative criticism began in 1981 with U.S. academic Robert Alter’s (b. 1935) groundbreaking study of the Hebrew Bible, The Art of Biblical Narrative. Alter attempted to cast new light on the text by illuminating the subtlety of the narrative and its lexical nuances, and examining the Hebrew Bible as a document of religious history. He illustrated how the authors of the Hebrew Bible use wordplay, symmetry, and suchlike to create tension in a story, and so sustain the reader’s interest.

“[The Bible] offers such splendid illustrations of the primary possibilities of narrative.”

Robert Alter, The Art of Biblical Narrative (1981)

Alter’s radical study suggesting that the Bible is a work of literary art that merits studied criticism proved highly influential when it was published. As the 1980s progressed, narrative criticism became more popular via various publications, including Shimon Bar-Efrat’s (b. 1929) Narrative Art in the Bible (1989). Narrative critics have since expanded the discipline to include the examination of issues such as plot development and the role of the omniscient narrator. CK

1981

Reaganomics

Ronald Reagan

The economic policies promoted by U.S. President Ronald Reagan during the 1980s

Ronald Reagan pictured preparing for his inaugural speech, following re-election for a second term in 1985. His election victory saw him win forty-nine out of fifty states.

“Reaganomics” is a term used to describe the economic policies of U.S. President Ronald Reagan (1911–2004), who held office from 1981 to 1989. The term was popularized by radio host Paul Harvey and was widely adopted by the news media and academics to discuss Reagan’s policies, both during and after his presidency.

“In this present crisis, government is not the solution to our problem; government is the problem.”

Ronald Reagan, Presidential Inaugural Address (1981)

Throughout the 1970s, the United States had experienced high rates of both inflation and unemployment, and government deficits had been increasing. Reagan argued that these economic problems could be solved by cutting marginal tax rates and reducing regulation and nondefense government spending. He also endorsed the disinflationary monetary policy of Federal Reserve Chairman Paul Volcker (b. 1927). Reagan’s economic proposals were strongly influenced by the so-called “supply side” school of economics and the Laffer curve, which held that total government tax revenue might actually increase if tax rates were cut. The thought was that a cut in tax rates might “pay for itself” if the resulting economic growth allowed the government to collect the same amount, or more, in tax revenue.
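The “pay for itself” reasoning can be illustrated with a toy model in Python. The assumption that the tax base shrinks linearly as the rate rises is an invented simplification for the sake of the example, not an empirical claim about the U.S. economy of the 1980s.

# A toy Laffer curve: revenue = rate * base, where the base shrinks as rates rise.
def tax_base(rate: float, full_base: float = 100.0) -> float:
    # Hypothetical assumption: economic activity falls off linearly with the tax rate.
    return full_base * (1.0 - rate)

def revenue(rate: float) -> float:
    return rate * tax_base(rate)

for rate in (0.2, 0.5, 0.7, 0.8):
    print(f"rate {rate:.0%}: revenue {revenue(rate):.1f}")
# In this toy model, cutting the rate from 80 percent to 70 percent raises
# revenue from 16.0 to 21.0, which is the claim in its simplest form.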

Reagan implemented many of his proposals during his administration, most notably the one to sharply reduce marginal tax rates. Proponents of Reaganomics argue that these policies helped to lower unemployment, reduce the rate of inflation, and increase gross domestic product per capita and average productivity. Critics, meanwhile, argue that the effects of Reaganomics were largely negative. Among other things, they point to significant increases in government debt, consumer debt, the trade deficit, and income inequality. These debates have continued to the present day, with many conservatives (both in and outside of the United States) defending economic policies loosely based on Reaganomics, and many liberals arguing that these sorts of policies are likely to be ineffective. BSh

1981

The Theory of Communicative Action

Jürgen Habermas

An argument that the key to emancipation is to be found in communication

Jürgen Habermas speaks at the Jewish Museum in Berlin, Germany, during a ceremony to award the prize for “Understanding and Tolerance” on November 13, 2010.

The Theory of Communicative Action is a two-volume work published in 1981 by German philosopher and sociologist Jürgen Habermas (b. 1929). The book’s account of social theory has significantly influenced research in sociology and philosophy in both continental Europe and the English-speaking world.

“Communicative action refers to the interaction of … actors [who] seek to reach an understanding about the action situation and their plans of action in order to coordinate their actions by way of agreement”

Jürgen Habermas, The Theory of Communicative Action (1981)

Habermas was a student of critical theorist Theodor Adorno (1903–69), and was influenced by Adorno’s method of social criticism that blended aspects of Marxist historical materialism with psychoanalysis, phenomenology, and existentialism. In The Theory of Communicative Action, Habermas expanded this method of criticism even further to encompass research in sociology, linguistics, the philosophy of language, and the philosophy of science. He then used this methodology to provide a general account of rationality and social interaction, tied tightly to the ideal of “communicative action,” which occurs when a group of actors enter a discussion with the purpose of reaching a jointly acceptable consensus. Habermas contrasted this with “strategic action,” which occurs when actors treat each other as mere means to achieve individual ends. Habermas then discussed the ways that various types of communicative scenarios can be seen as approximating the ideal of communicative action.

Both The Theory of Communicative Action’s methodology and its conclusions have had significant scholarly and cultural impact. Many philosophers and sociologists have emulated Habermas’s pluralistic methodology, which draws broadly from both the continental and analytic philosophical traditions, as well as from contemporary research in the social sciences. His emphasis on the importance of meaningful public discourse, and on the social structures that are needed to enable it, has influenced both scholars and policy makers. BSh

1982

Holy Blood, Holy Grail

M. Baigent, R. Leigh, and H. Lincoln

A theory that Jesus Christ married Mary Magdalene and fathered a lineage of kings

The coauthors of The Holy Blood and the Holy Grail, Henry Lincoln, Richard Leigh, and Michael Baigent.

The “Holy Blood, Holy Grail” hypothesis is a pseudo-historical conspiracy theory that claims that Jesus Christ survived his crucifixion and took Mary Magdalene as his wife, fathering one or more children with her. His bloodline later became the Merovingian dynasty of kings. Proponents of the theory believe that modern Christianity is based on a false account of the life of Christ and that a secret society called the Priory of Sion, which has existed for hundreds of years and included many of Europe’s leading intellectuals, statesmen, and artists, will eventually restore the descendants of the Merovingians to power.

The groundwork for the theory was laid in the 1950s by Pierre Plantard (1920–2000), a French draftsman and serial mystifier. Plantard constructed an elaborate hoax around a forged “historical” text called the Dossiers Secrets d’Henri Lobineau (Secret Files of Henri Lobineau), which detailed the existence of the Priory of Sion, a shadow organization dedicated to installing the “Great Monarch” predicted by sixteenth-century French astrologer Nostradamus. In the 1980s, Henry Lincoln (b. 1930), an English television presenter, unaware that the Dossiers Secrets was a hoax, used it as the basis for a series of documentaries and a book, The Holy Blood and the Holy Grail (1982), coauthored with Michael Baigent (b. 1948) and Richard Leigh (1943–2007).

Elements of the Holy Blood, Holy Grail theory, especially the Priory of Sion, have enjoyed widespread popularity and become staples of both ongoing conspiracy theories and conspiracy fiction—notably in Dan Brown’s mystery detective novel The Da Vinci Code (2003). The theory has helped to popularize belief in “alternate history,” the view that the modern world (of which Christianity is a major component) is based upon a falsehood that will eventually come to light—possibly with dire consequences for the current, supposedly false and oppressive world order. BSh

1982

Rational Drug Design

Arvid Carlsson

Using information about the structure of a drug receptor to identify or create drugs

A computer artwork depicts the action of selective serotonin reuptake inhibitors at a chemical synapse.

Most new drugs are, crudely, the product of trial and error: a new chemical compound is tested on cultured cells or laboratory rats, and its effects are then matched to a medical problem it might treat. But what if the process could be reversed? What if there could be rational drug design, starting by identifying the problem and then designing a drug to modulate or alter it therapeutically? Such was the thought that led to the development of selective serotonin reuptake inhibitors, some of the first drugs to be developed by rational drug design and now the most widely prescribed antidepressants in the world.

While employed by the Swedish pharmaceutical company Astra AB, the Swedish scientist Arvid Carlsson (b. 1923) worked on brompheniramine, an over-the-counter antihistamine used to treat symptoms of the common cold. The drug also had antidepressant properties, inhibiting the reuptake, or reabsorption by cells, of the neurotransmitter serotonin after it has performed its function of transmitting an impulse in the nervous system. Serotonin is thought to contribute to feelings of happiness and wellbeing, so preventing its reabsorption increases its effects. Carlsson and his colleagues used their knowledge of how brompheniramine works to create from it a new, rationally designed drug, zimelidine, intended to act as a selective serotonin reuptake inhibitor. The drug went on sale to the public in 1982 but was later withdrawn and banned after serious side effects were observed.

Despite this setback, several more rationally designed antidepressants were developed, most notably Prozac. Today, rational drug design is aided by powerful computer programs that can search through databases of chemical compounds and select those most likely to achieve the desired effect. SA

1983

Cell Phone

Motorola Corporation

A device that in time would transform the daily existence of billions of people

Actor Don Johnson as detective James “Sonny” Crockett uses a cell phone in a Miami Vice episode in 1984.

The idea of being able to make telephone calls while on the move first came to fruition, in a limited sense, with the “walkie-talkie” two-way radio transceiver developed by the U.S. Motorola company during World War II (1939–45). Then, on June 17, 1946, a truck driver in St. Louis, Missouri, reached inside the cab of his truck for a handset, made a phone call, and ushered in the era of the wireless telephone. For more than a decade, technicians at AT&T and Bell Laboratories had been developing a wireless telephone service that could make and receive calls from automobiles. The phone was predictably primitive. Only a few calls could be made at any one time, and a single transmitter handled the calls of an entire urban region. Not only that, but each vehicle-mounted transmitting unit weighed around 80 pounds (36 kg) and thus could hardly be considered truly mobile.

“Marty called me … and said: ‘We’ve got to build a portable cell phone.’”

Rudy Krolopp, product designer

In the early 1970s, Marty Cooper, R&D chief at Motorola, gave product designer Rudy Krolopp the job of developing the world’s first cellular telephone. Krolopp took his commission to his team—eight people in all—and said that if anybody present thought it could not be done, they should say so: nobody did. Within weeks they had developed a working model, but it would be another ten years and a hundred million dollars (mostly for building the necessary infrastructure) before Motorola was ready to launch its product. In 1983, the first cell phone to be offered commercially, the Motorola DynaTAC 8000X, or “The Brick” as it came to be affectionately called, initiated a revolution in telecommunications. BS

1983

Multiple Intelligences

Howard Gardner

A theory that human intelligence is composed of several different capacities

The concept of multiple intelligences was first proposed by U.S. psychologist Howard Gardner (b. 1943) in his book Frames of Mind: The Theory of Multiple Intelligences (1983). Gardner argued that an intelligence should be understood as a capacity to solve particular sorts of problems or to produce services or goods valued by a person’s peers. He suggested that there are multiple, distinct psychological capacities that count as intelligences, including the mastery of language, musical ability, logical or mathematical ability, and the ability to interact with others. Gardner originally proposed that there were seven intelligences, but later suggested that there may be more. He argued that each type of intelligence could be distinguished from the others by factors such as engaging different areas of the brain and having different patterns of skill development. Importantly, each type of intelligence was supposed to be relatively independent of the others, and Gardner argued that a person’s performance on the tasks associated with one type of intelligence was only weakly predictive of performance on tasks associated with another.

“We all have these intelligences, that’s what makes us human beings.”

Howard Gardner, Multiple Intelligences (2011 edition)

While Gardner’s work has been influential, especially among educators, it has faced a number of criticisms. One concern has been the theory’s vagueness and lack of testability. Another concern has been the purported lack of evidential support for the theory, since some studies have found that the sorts of capacities Gardner talks about are in fact highly correlated, and are not independent in the way that the theory of multiple intelligences would seem to predict. BSh

c. 1985

Downsizing

United States

Making a company or organization smaller by eliminating staff positions

The Ford Motor Company downsized both its cars and its workforce during the 1980s.

To downsize is to eliminate staff positions specifically for business reasons, such as improving efficiency or effectiveness. The term seems to have originated in the automotive industry in the 1970s, when cars were downsized by being built smaller and lighter to comply with new fuel economy standards. The concept of downsizing did not come to prominence until the 1980s, however, when U.S. companies sought a means to combat the combined effects of bloated managerial structures, recession, increased international competition, and new technology.

When the term was in vogue during the late 1980s and early 1990s, downsizing was regarded as ubiquitous. In fact, however, it was not as prevalent as it seemed: although it was common in the manufacturing sector—then in the throes of a decades-long decline—it was not common in the retail and service sectors, which were generally upsizing. Nor was downsizing as beneficial as it seemed: manufacturing firms that downsized reduced their labor costs per employee but saw their productivity decline; they increased their profitability but saw their stock values fall. Interestingly, in the mid-1990s, a comparison of downsizing announcements with employment data revealed that a slim majority of announced downsizers were actually upsizers, restructuring their operations under the guise of downsizing.

“Personnel cutbacks have taken a heavy toll on employee loyalty …”

Steven Prokesch, The New York Times (January 25, 1987)

Two of the main causes of downsizing are technological change and foreign competition. Since both are likely to continue, the practice of downsizing is likely to carry on, too. GB

1985

Six Thinking Hats

Edward de Bono

Interchangeable approaches to getting good results in brainstorming sessions

In his book Six Thinking Hats (1985), author Edward de Bono (b. 1933) describes an innovative method of organizing ideas in order to reach the best conclusion in the fastest time. The approach may be used by individuals working alone, but it is best suited to group discussions in which disparate and sometimes conflicting ideas might otherwise never be fully synthesized into plans of action: so-called “spaghetti thinking.” Proceedings are chaired by a moderator or facilitator.

“[Having an] adequate way … does not mean there cannot be a better way.”

Edward de Bono, Six Thinking Hats (1985)

Users of the de Bono scheme take turns with the headwear (which may be literal or metaphorical). Each hat is a different color to indicate the type of thinking that wearers should concentrate on while wearing it. The white hat represents information: what do we actually know about the matter under discussion? The red hat stands for emotion: what are our gut reactions to the matter in hand? While wearing the black hat, participants should focus on reasons for caution. The yellow hat represents optimistic responses: wearers should come up with best-case scenarios. Wearers of the green hat are encouraged to invent completely different approaches to the subject, to probe and to provoke. The blue hat, with which every sequence should begin and end, stands for what de Bono terms “meta thinking”: at the start, blue hat wearers predict where the discussion might go; at the end of the session, they review where it has actually been and propose a plan of action. The Six Thinking Hats approach has been adopted by numerous companies, including Speedo, which used it while developing its swimsuit range. GL

1986

Six Sigma

Motorola Corporation

A methodology for streamlining manufacturing and business processes

In 1986, U.S. telecommunications company Motorola Corporation established Six Sigma as part of the curriculum of its in-house Motorola Education and Training Center. Six Sigma began as an employee enrichment program teaching a wide variety of subjects intended to improve employee productivity. Bill Smith, an engineer and quality control expert, joined Motorola in 1986 and worked with chief operating officer John F. Mitchell on implementing Six Sigma. The system gained popularity after its enthusiastic adoption by General Electric’s then chief executive officer, Jack Welch (b. 1935).

Six Sigma is a system of quality control in business. It seeks to standardize practices in manufacturing and other business processes with the goal of eliminating wasteful defects, such as nonconformity of a product or service to its original specifications. The primary assumption of Six Sigma is that manufacturing and business processes have features that can be measured, analyzed, improved, and controlled. The system is notable for its reliance on obscure nomenclature and its adherence to a rigid hierarchy, in which employees earn titles such as champion, master black belt, black belt, and green belt. By 2006, fifty-eight large companies had announced Six Sigma programs.
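
The name itself reflects a piece of statistical reasoning that belongs to the methodology in general rather than to Motorola’s curriculum in particular: if a measured characteristic of a process is roughly normally distributed and its mean sits six standard deviations (sigma) inside the nearest specification limit, defects become vanishingly rare. A short LaTeX sketch of the standard calculation follows (the 1.5-sigma allowance for drift is a convention of the methodology, not a law of statistics):

% Defect probability for a normally distributed characteristic whose mean lies
% k standard deviations inside the nearest specification limit, where \Phi is
% the standard normal cumulative distribution function:
P(\text{defect}) = 1 - \Phi(k)
% At k = 6, the one-sided defect rate is about 1 - \Phi(6) \approx 1 \times 10^{-9},
% roughly one defect per billion opportunities. Allowing the process mean to
% drift by 1.5\sigma gives an effective k of 4.5:
P(\text{defect}) \approx 1 - \Phi(4.5) \approx 3.4 \times 10^{-6}
% that is, the oft-quoted figure of about 3.4 defects per million opportunities.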

“The worst experience I can imagine is being a casualty of process variation.”

Six Sigma, “Variation—The Root of All Process Evil”

Business reaction to Six Sigma has been mixed; Forbes magazine pointed out that of the fifty-eight companies that had implemented Six Sigma programs by 2006, 91 percent regularly underperformed the Standard & Poor’s 500 (a stock market index based on the market capitalizations of 500 leading companies). APT

1986

GM Food

United States

Altering the genetic makeup of plants and animals to increase their potential as foods

A researcher at the International Rice Research Institute in the Philippines works on golden rice.

Genetically modified (GM) food is any edible product made from organisms whose DNA has been altered by scientists. After almost half a century of experimentation, in 1986 the U.S. Department of Agriculture approved genetic modification; since then, hundreds of patents have been awarded for GM plants.

The aim of genetic modification is to enhance organisms in commercial terms by reducing their flaws—for example, by removing material that may slow their rate of growth, or by introducing new material that will make them easier and cheaper to exploit. It can also be used to make food more nutritious—golden rice, for example, is normal rice that has been genetically modified to produce beta-carotene, a precursor of vitamin A, helping to counter blindness and other diseases in children in the developing world. At the time of writing, all GM foods are crop-derived, but work on GM salmon is also near completion.

“You are more likely to get hit by an asteroid than to get hurt by GM food.”

Mark Lynas, writer and climate change specialist

GM food is highly controversial: opponents warn that interference with nature may reduce plants’ resistance to pests and diseases or increase their production of toxins. There is also widespread concern that the long-term effects of GM on people and ecosystems are imperfectly understood.

Nevertheless, in 1995 the U.S. government approved GM foods for human consumption, and by the millennium nearly half of all the corn, cotton, and soybeans planted in the United States were GM. At the end of 2010, GM crops covered more than one-tenth of the world’s farmland, and their extent continues to increase; GM food is now on sale almost everywhere, and indeed is helping to feed the world. GL

1987

Kyōsei

Ryuzaburo Kaku

Businesses should practice symbiosis to promote mutual benefit and welfare

Kyōsei is a paradigm for business ethics that was implemented by Ryuzaburo Kaku (1926–2001), chairman of Canon Inc., in 1987. Kaku was a driving force behind the Caux Round Table, an international organization of business executives that first met in 1986 to promote ethical business practices throughout the world.

Kyōsei means “symbiosis” in the biological sense of species living together in mutually beneficial relationships. Kaku believed that businesses should harmonize profit and ethical principles and help to make the world a better place. He argued that business depends on a culture of responsibility and trust that can only be promoted by a commitment to ethical principles. Kyōsei is one of the cornerstones of stakeholder theory, a business philosophy that considers corporations to have moral obligations not only to their shareholders, but also to everyone with a stake in the company, such as employees, communities, customers, financiers, and suppliers. A business should strive to exist symbiotically with all of these stakeholders in order to promote the mutual welfare of all. Kaku believed that kyōsei also could be used to combat global imbalances in areas such as trade relations, employment, and the environment.

“Kyōsei is living and working together for the common good of mankind.”

Ryuzaburo Kaku

Kaku’s leadership at Canon illustrated that a corporation could become more profitable by conducting business ethically to promote the long-term good of the global community. Kyōsei has been adapted as a core principle of environmental ethics, stressing that humans can thrive only if they learn to live symbiotically with the environment that sustains them. JM

1987

Flynn Effect

James R. Flynn

The steady rise over time of standardized intelligence test scores

According to intelligence quotient (IQ) test scores, people are getting smarter: a phenomenon known as the Flynn effect. Named after James R. Flynn (b. 1934), the political studies professor who first observed it, the Flynn effect shows that test scores have risen dramatically in several nations around the world. Though IQ tests are regularly updated with new questions so that the average score across the test-taking population remains 100, people given questions from prior tests tend to score higher than those who originally answered them.

Flynn published his findings in 1987 in a paper titled “Massive IQ Gains in 14 Nations: What IQ Tests Really Measure.” The term “Flynn effect” was later popularized by authors Richard Herrnstein and Charles Murray in their book The Bell Curve (1994). Flynn’s data showed an average rise in IQ of about 0.3 points per year. When measured against scores from tests of the 1930s, the Flynn effect implies that either people today are much smarter than the dullards of the past, or there is some other factor at work, such as IQ tests failing to measure intelligence accurately.
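
A rough arithmetic illustration of the effect’s size (the fifty-year span is chosen purely for illustration; Flynn’s samples and rates of gain varied by country and by test):

% Cumulative gain implied by a rise of roughly 0.3 IQ points per year:
0.3 \ \tfrac{\text{points}}{\text{year}} \times 50 \ \text{years} \approx 15 \ \text{points}
% Because each renorming re-centers the mean at 100, a performance that was
% average against mid-1930s norms would score only about 85 against norms
% set half a century later.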
