Subsequent product placement has been associated more with movies and television than with literature. In 1896, French movie makers the Lumière brothers and Swiss businessman François-Henri Lavanchy-Clarke entered into an arrangement to promote a product, Sunlight Soap, in the Lumières’ new short film, Parade of the Eighth Battalion; in the movie, the logo of the product is seen briefly on a wheelbarrow. Hershey’s chocolate bars are mentioned in William A. Wellman’s Wings (1927), and a young boy desperate to become an explorer is seen reading a copy of National Geographic in Frank Capra’s It’s a Wonderful Life (1946). Product placements will always be with us, whether subtle—Steve McQueen pursuing bad guys through San Francisco in his Ford Mustang GT in Bullitt (1968)—or shamelessly garish—the Compaq computers used in Alien vs. Predator (2004).

Some products need only be hinted at to affect sales. In The Italian Job (2003), a bottle of champagne was passed around a group of thieves celebrating a recent heist. No brand was mentioned nor label seen, but the bottle’s shape was so distinctive that the target audience immediately recognized it. Product placement has been criticized for blurring the distinction between art and advertising, but its existence should not surprise us. After all, television was never only about entertainment—its purpose has always been to sell stuff. JMa

1874

Impressionism in Art

France

Art designed to give an impression of its subject, not a realistic representation

Claude Monet’s Impression: Sunrise (1872), the painting that named a movement across the arts.

The artistic movement dubbed “Impressionism” was formally launched in 1874 by a group of thirty Parisian artists led by Claude Monet (1840–1926), Edgar Degas (1834–1917), and Camille Pissarro (1830–1903), at a private exhibition mounted by the Cooperative and Anonymous Society of Painters, Sculptors, and Engravers. Other prominent artists, such as Paul Cézanne, Pierre-Auguste Renoir, and Berthe Morisot, joined the group later. The movement represented a reaction against the realism that characterized French painting and sculpture at the time. It was also a deliberate challenge to the Académie des Beaux-Arts, which dictated the artistic standards acceptable in France, including the subjects that artists should paint. The Académie refused to exhibit the work of artists who did not conform to its strictures.

The term “impressionism” was coined by the journalist Louis Leroy, writing in the satirical magazine Le Charivari, after seeing Monet’s painting Impression, Soleil Levant (Impression: Sunrise) at the exhibition in 1874. Initially the word was used by critics as a derogatory description of the movement, but the name was accepted by both proponents and denigrators of the style. In fact, Impressionism covers a wide range of styles, some of them more representational than others.

The Impressionists held seven more shows, the last of which was in 1886. By that time the group had begun to dissolve, as each painter focused on their own work. But despite the fleeting nature of the movement’s existence, its impact would be long-lived. Before the Impressionist revolution, the visual arts were in a straitjacket of technique and subject matter. Impressionism relied on direct impressions on the senses, without the distraction of faithful representation, for its impact upon those who viewed it. It encouraged artists to extend the range of acceptable subjects, and changed the way that people thought about art. JF

1874

Set Theory

Georg Cantor

The theory that numbers or functions may be arranged into sets without considering their individual properties

Georg Cantor, photographed in c. 1910; wrongheaded criticism of his work caused him much depression in later life, but his set theory was to originate a new branch of mathematics.

German mathematician Georg Cantor (1845–1918) is credited with the invention of set theory in his paper “On a Characteristic Property of All Real Algebraic Numbers” (1874). In the following years he carried out further work on the theory and its application to other aspects of mathematics, such as infinity and the correspondence between groups of numbers, or equivalence.

“A set is a many that allows itself to be thought of as one.”

Georg Cantor, quoted in Infinity and the Mind by Rudy Rucker (1983)

Set theory is the study of sets, and of the relation that connects the elements of a set to the set itself. The most revolutionary aspect of Cantor’s theory was that it considered orders of different infinities, and particularly those that he termed “transfinities”; at the time, all infinities were seen as the same size. Cantor also included both real and irrational numbers in the sets that he proposed, although influential mathematicians, such as Leopold Kronecker (1823–91), did not believe in the existence of irrational numbers. Set theory became the subject of immediate and long-lasting controversy, which hampered Cantor’s career and put a strain on his mental health. A number of paradoxes that appeared to undermine the theory also caused many to doubt the validity of his ideas. Cantor suffered a breakdown in 1884 when his appointment to the University of Berlin was opposed, but by 1885 he had recovered and he continued to publish on set theory, overseeing its gradual acceptance into mainstream mathematics by the beginning of the twentieth century.
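
A one-line sketch in modern notation (which postdates Cantor’s own papers) shows why the infinities form an endless hierarchy: Cantor’s theorem states that every set is strictly smaller than its power set, so, starting from the countable infinity of the natural numbers, ever-larger transfinite cardinals follow without end:

    |S| < |2^{S}| \ \text{for every set } S, \qquad \aleph_0 = |\mathbb{N}| < |2^{\mathbb{N}}| = |\mathbb{R}| < |2^{\mathbb{R}}| < \cdots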

Set theory completely revolutionized the way that mathematicians think about numbers and functions and their relationships, and underpins much of modern mathematics. It also has some useful applications in real life, such as the simple Venn diagram and the way in which we categorize disparate human beings according to the properties that define them as part of the group—for example, New York Giants supporters or members of the Royal Society. JF
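
The kind of everyday categorization described above maps directly onto the set types built into modern programming languages. The following minimal Python sketch (the names are invented purely for illustration) shows membership, intersection, and union—the basic relations of set theory:

    # Two groups, each defined by a shared property of its members
    giants_fans = {"Alice", "Bob", "Carol"}
    royal_society = {"Carol", "Dan"}

    print("Alice" in giants_fans)       # membership: True
    print(giants_fans & royal_society)  # intersection: {'Carol'}
    print(giants_fans | royal_society)  # union of both groups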

1874

Property Dualism

Thomas Huxley

The theory that substances can have two kinds of property: physical and mental

Property dualism is a philosophical compromise between substance dualism and materialism, combining the more plausible aspects of both while avoiding their main problems. Property dualists agree with materialists that all substances are physical, but they hold that physical substances can have both physical and mental states. An example of a physical state is being 6 feet (1.8 m) tall; an example of a mental state is being in pain, or in love. While pain may be caused by the firing of neurons in the brain, and the sensation of being in love by hormones released into the blood, philosophically these mental states do not consist only of the physical events.

French mathematician and philosopher René Descartes (1596–1650) noted that mental and material states coexist in human beings, and proposed that the mind and brain exist separately, each in its own right (a theory known as substance dualism). But Thomas Huxley (1825–95) proposed that the conscious mind is an epiphenomenon (a secondary phenomenon), an aspect of what he called automatism. In an article titled “On the Hypothesis that Animals are Automata, and its History” (1874), he likened it to the whistle on a steam locomotive: the action of the engine is absolutely necessary for the whistle to exist, but the whistle itself contributes nothing to the work of the engine, emerging from it as a discrete phenomenon.

Nineteenth-century epiphenomenalism had great influence on behaviorists, but eventually went out of fashion in the mid-twentieth century. However, more recently property dualism has enjoyed new popularity with those who favor a materialist view of the human brain and consciousness but reject reductive physicalism. It also has important links with the experimental study of the psychology of higher mental processes. JF

1874

The Hedonistic Paradox

Henry Sidgwick

The philosophy that happiness cannot be attained by actively pursuing it

The Hedonistic Paradox is a sobering principle that asserts that anyone who seeks happiness for their own sake will always be denied it; that the way to happiness is not to be found in the selfish pursuit of gratifying the self, but only truly comes as a by-product of helping others. This “paradox of happiness” was first noted by the English philosopher Henry Sidgwick (1838–1900) in his book The Methods of Ethics, published in 1874. Sidgwick understood, as did the Epicurean Greeks of the third century BCE, that the pursuit of happiness had within it some inherent problems: that “if you don’t achieve what is sought, then you lose by frustration; if you do achieve what is sought, you lose by boredom.” Whichever way you go, happiness appears an elusive thing. The Epicureans always sought the optimum, rather than the maximum, degree of pleasure, a state they called ataraxia—peace of mind. Drink too much wine, and all one is left with is a hangover.

“The principle of Egoistic Hedonism … is practically self-limiting.”

Henry Sidgwick

The failure to obtain pleasure by seeking it is well documented in literature. Aristotle asked, “How then is it that no one is continuously pleased? Is it that we grow weary?” The utilitarian philosopher John Stuart Mill said, “Ask yourself whether you are happy, and you cease to be so.” The Austrian psychiatrist Viktor Frankl claimed that happiness is ephemeral and elusive and, even when captured, is only an “unintended side effect of one’s personal dedication to a greater cause.” Søren Kierkegaard warned that we always run right past it, such is the rate of our furious pursuit, while the great Brazilian novelist João Guimarães Rosa said happiness could only be found “in little moments of inattention.” BS

1874

Subatomic Particles

George Stoney

The postulation of the existence of unseen particles that are smaller than atoms

A colored bubble chamber photograph showing tracks left by subatomic particles from a particle accelerator at CERN, the European particle physics laboratory at Geneva, Switzerland.

The discovery in the nineteenth century that the mass of the universe consists wholly of unseen subatomic particles (particles smaller than atoms) was to occur in an incremental rather than instantaneous way. The discovery process actually took more than a hundred years and was dependent upon the invention of tools that would enable subatomic particles to be found.

“Could anything at first sight seem more impractical than a body which is so small that its mass is an insignificant fraction of the mass of an atom of hydrogen?”

J. J. Thomson, physicist

In 1800 came the discovery of infrared radiation by the German-born British astronomer William Herschel (1738–1822), and in the following year the German chemist Johann Ritter (1776–1810) observed that silver chloride went from white to black when placed at the dark end of the sun’s spectrum, thus proving the existence of ultraviolet radiation. The veil drawn over the world of subatomic particles was thus lifted a little, but it was not until the end of the century that humankind learned more. The electron was detected in 1897, the alpha particle in 1899, and the gamma ray (high-energy photon) in 1900. Some subatomic particles would hold out longer still. The proton avoided detection until 1919, and the neutron was only discovered in 1932.

Although it was British physicist J. J. (Joseph John) Thomson (1856–1940) who first detected the electron in 1897, its existence had already been postulated in 1874 by Irish physicist George Stoney (1826–1911). Stoney calculated the magnitude of the electron’s charge in the course of his research on the electrolysis of water and the kinetic theory of gases. He would actually coin the term “electron” in 1891. It was Stoney who took the first speculative leap forward in the recognition of this entirely unseen world when he proclaimed the electron as being the “fundamental unit quantity of electricity.” He was certain that electricity must be composed of indivisible particles of equal size, and he went on to provide the first ever estimates of their electric charge. JMa

1875

Dictatorship of the Proletariat

Karl Marx

A political state in which the working class has control of political power

Karl Marx wrote his “Critique of the Gotha Program” in the later years of his life, during which he was beset by what he called “chronic mental depression.”

Prussian-German philosopher, historian, and revolutionary socialist Karl Marx (1818–83) first coined the phrase “dictatorship of the proletariat” in a series of articles published in London in 1850. His main development of the idea came in an attack on the principles of the German Workers Party, written in 1875 and titled “Critique of the Gotha Program.” In the document Marx actually refers to the “revolutionary dictatorship of the proletariat.” The Critique was clearly written in acknowledgment of the rise of the proletarian Paris Commune, which had ruled that city for a brief time in 1871.

“Capital is an historical necessity, but, so too, its grave digger, the socialist proletariat.”

Rosa Luxemburg, “The Junius Pamphlet” (1916)

Although Marx used the word “dictatorship,” it is unclear whether he meant the word in the sense of a group holding absolute power, or merely in the older sense of dictatura, a term used in the ancient Roman republic to mean possession of political power, or simply “rule.” The distinction made between a dictatorship and a democratically elected representative government is a modern development.

In classical Marxist theory, the dictatorship of the proletariat is seen as a necessary phase of transition after a socialist revolution, bridging the gap between class rule by the capitalist bourgeoisie and the abolition of all class distinctions that Marx believed would naturally occur in a socialist (or communist) society. But when Russian Marxist-Leninists came to plan their revolution in the 1890s, the idea of dictatorship of the proletariat gave way to a belief that the proletariat should be led by those who could correctly interpret communist ideas; opponents of communism would have no say in government. Thus, Russian communist rule turned out to be the dictatorship of a political party, rather than of a class as Marx had envisaged. Perceptions of Marxism are heavily colored by the version adopted by the Soviet Union, so it is the Soviet version of the concept that most people remember. JF

1876

Telephone

Alexander Graham Bell

A machine enabling people to talk directly to each other over long distances

On March 11, 1876, Alexander Graham Bell publicly demonstrated his remarkable invention.

Humans are social animals, but the natural desire to be able to speak to individuals who are geographically distant was frustrated until 1876, when what seemed an impossible dream became a reality. Yet the birth of what was understood as the “speaking telegraph” was by no means without controversy because many individuals claimed a role in its invention.

The telephone (the name is derived from the Greek for “distant voice”) developed from the electrical telegraph, an early nineteenth-century invention by Samuel Morse (1791–1872) and others. This device used electrical signals conveyed along wires to transmit individual letters that together formed a message. Useful though it was, the telegraph lacked the immediacy and depth of nuance of human speech, but in the second half of the century inventors developed acoustic telegraphy, capable of transmitting multiple messages along a single wire at the same time by using different channels or audio frequencies. From this development it was but a simple step to create a working telephone with a mouthpiece that converted the sound of the human voice into electrical signals, and a receiver that converted the signals back into sound. For the first time, people could offer information, or friendship, to others no matter how distant they were.

Who invented the telephone is still debated, as many different people were working in competition with each other at the time. The man who is usually credited is Scottish-born inventor Alexander Graham Bell (1847–1922), whose Patent 174,465 was the first to be issued by the U.S. Patent Office in Washington, D.C., on March 7, 1876. Returning to his laboratory in Boston, Bell finally managed to get his telephone to work three days later, on March 10. He uttered the immortal words, “Mr. Watson, come here. I want to see you,” into the transmitter, and his assistant, listening to the receiver in an adjoining room, heard the words clearly. SA

1877

Phonograph

Thomas Edison

A means of reproducing recorded sound, otherwise known as the gramophone

Thomas Edison, photographed with a commercial descendant of his 1877 cylinder phonograph.

Since the beginning of the universe, it had been true that every sound, once it had faded, was gone forever. Musical performances, however magnificent, were lost, and no one could keep a record of a loved one’s voice; only listeners’ memories kept sounds from oblivion.

In 1877, U.S. inventor and businessman Thomas Edison (1847–1931) helped to bring that situation to an end by inventing a machine to transcribe messages for use in telegraphy. His mechanic, John Kruesi (1843–99), built a working model for him during the late summer or autumn of that year, and in December he showed the machine to staff of Scientific American, which published an article on it. Edison filed for a patent in the same month, which was granted in February 1878.

Further development of the gramophone was left to others, including Scottish-born inventor Alexander Graham Bell (1847–1922) who, with Charles Tainter (1854–1940), designed a prototype using a wax cylinder incised by a floating stylus. Bell and Tainter suggested a joint development of the idea with Edison, but the latter refused, preferring to market his own version of the phonograph, via a new company founded in 1887.

The phonograph was used for speech recordings and as a dictaphone. By the early part of the twentieth century, however, the rotating cylinder had given way to the disk, or record, which produced better quality recordings of music. The rest of the century saw new versions of the gramophone, with 78 rpm (revolutions per minute), 45 rpm, and 33⅓ rpm models, culminating in the digital technology of the compact disk player.

Edison’s invention represented the first method of recording sound for playing back at a later time, a truly revolutionary device, and one that changed both the experience of sound, particularly musical sound, and the expectation of its hearers. The phonograph could restore to life, as U.S. journalist Ambrose Bierce put it, “dead noises.” JF

1877

Animation

Charles-Émile Reynaud

Images made to appear to move when viewed using a remarkable machine

A poster advertises Charles-Émile Reynaud’s Pantomimes Lumineuses at the Musée Grévin, Paris (c. 1900).

The invention of photography in 1822, and that of the phonograph in 1877, revealed for the first time that sights and sounds could be captured exactly for later enjoyment. Humanity was becoming aware that, with ingenuity, no aspect of the physical world need be lost. With hindsight, it was only a matter of time before the movement of objects in space would be captured, too.

The animation of images had some early precursors, such as the magic lantern, in which light shining through a translucent oil painting projected the image onto a wall. However, projected animation was introduced by French science teacher Charles-Émile Reynaud (1844–1918) with his invention in 1877 of the praxinoscope, in which a strip of pictures was placed around the inner surface of a spinning cylinder. The praxinoscope was more functional than its technological predecessor, the zoetrope. Rather than watching the images through the zoetrope’s narrow viewing slits, the user watched an inner circle of mirrors as they reflected the inner surface of the spinning cylinder. The rapid succession of images appeared to move, producing an illusion that was clearer than the zoetrope had accomplished.

“Animation can explain whatever the mind of man can conceive.”

Walt Disney, animator and movie producer

In 1888, Reynaud created a large-scale system based on the praxinoscope, called the Théâtre Optique. This was able to project the moving images of the praxinoscope onto a screen. The inventor first demonstrated his system in 1892, for the Musée Grévin in Paris. Reynaud had produced three short movies, and the triple-bill showing, which he called the Pantomimes Lumineuses, was the world’s first instance of projected animated cartoon movies. JE

1877

Social Darwinism

Joseph Fisher

An extension of Darwin’s theory of evolution to social and economic relations

The term “social Darwinism” was first used by Joseph Fisher in an article, “A History of Landholding in Ireland,” published in 1877, and was popularized by the historian Richard Hofstadter (1916–70) in his book Social Darwinism in American Thought (1944). Always a term of disapprobation, it has over the years been linked, generally scornfully, with economic free marketeering, Mormon social philosophy, and Malthusian socioeconomic theory, among other ideas. In the nineteenth century, it was appropriated by Europeans wanting to use Darwin’s theories of evolutionary struggle to explain the rise to prominence of the white or Caucasian race over African and Asian races.

In the late nineteenth century, this pseudoscience gained credence across much of the Western world. It drove the “civilizing” mission of European imperialists as they invaded “backward” areas of the world, appropriated native people’s land and possessions, and imposed colonial rule. Along with the idea of eugenics invented by Francis Galton in 1883, itself an offshoot of social Darwinism, it informed German policies in South-West Africa in the early years of the twentieth century, and ultimately fed directly into Nazi Aryanism.

“There is hardly a good thing in [the world] that is not the result of successful strife.”

Edwin Lawrence Godkin, The Nation magazine (1877)

In its day, social Darwinism commanded almost universal acceptance and enthusiasm, although the Catholic Church decried its espousal of inequality and lack of compassion. Even after scientists were able to show that the genetic differences between racial groupings are no greater than those between different individuals who belong to the same race, the prejudices it had engendered remained. JF

1878

Pragmatic Theory of Truth

Charles Sanders Peirce

A belief is likely to be true if holding it is of use to the believer

The pragmatic theory of truth holds that the truth or validity of a belief depends on how useful that belief is, or what the outcome of holding that belief would be. One of the earliest and most influential defenders of this theory was the U.S. pragmatist Charles Sanders Peirce (1839–1914), who defended a version of the theory in a series of articles in Popular Science Monthly magazine, most notably the essay “How to Make Our Ideas Clear,” all published in 1878.

“The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real.”

Charles Sanders Peirce, Popular Science Monthly (1878)

In How to Make Our Ideas Clear, Peirce distinguished between scientific beliefs that can be tested using the experimental methods of science, and metaphysical beliefs that cannot be tested using these methods. Peirce identified a true belief to be the sort of belief on which ideal investigators using scientific methods would eventually converge, given sufficient time. By this definition, many traditional metaphysical (or religious) beliefs would turn out to be neither true nor false, and therefore “meaningless.”

The U.S. second-generation pragmatist William James (1842–1910) broadened Peirce’s definition by proposing that a belief was true if it was “expedient” for the person believing it. This allowed for the possibility that metaphysical and religious claims could be true, but it also suggested that truth was relative to a particular individual, particular time, or particular place, rather than absolute.

The pragmatic theory is rarely defended today in the forms envisioned by Peirce or James, but it is an important precursor to modern theories that identify a belief’s truth with its conditions of verification—what experience would show this belief to be true?—or with warranted assertibility conditions—under what conditions would a person be justified in asserting this belief? Some modern thinkers with a strong pragmatist bent include Michael Dummett, Richard Rorty, Hilary Putnam, and Jürgen Habermas. BSh

1878

Predicate Logic

Gottlob Frege

A method of reasoning that led to the creation of modern logic

A digitally colored photograph of Gottlob Frege, c. 1920, who said, “Every good mathematician is at least half a philosopher, and every good philosopher at least half a mathematician.”

German mathematician, logician, and philosopher Friedrich Ludwig Gottlob Frege (1848–1925) began his career specializing in the geometric representation of imaginary forms on a flat plane. However, it was not long before his thoughts began to turn increasingly toward logic and its relationship to mathematics. Frege felt that mathematics and logic were fundamentally entwined, and that mathematics, in the end, reduces to pure logic. As a result of his efforts, he virtually invented, in 1878, the concept of predicate logic; it was the single biggest leap forward in the philosophy and approach to logic since the work of Aristotle.

“I hope I may claim in the present work to have made it probable that the laws of arithmetic are analytic judgments … arithmetic thus becomes simply a development of logic.”

Gottlob Frege

Predicate logic formalizes the way in which new information can be gleaned from two or more preceding pieces of information. In its simplest form, it can be seen in the following three sentences:

1) All donkeys are mammals

2) Harvey is a donkey

3) Harvey is a mammal.

The argument is purely logical because the conclusion follows from the form of the first two statements—their “logical structure”—rather than from anything particular to donkeys or mammals; anyone who accepts the premises is thereby committed to the conclusion.
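
In the modern notation that descends from Frege’s work (the symbols below are today’s conventions, not his original two-dimensional script), the same argument can be written as a universally quantified conditional plus a particular instance, with the conclusion following by instantiation and modus ponens:

    \forall x\,(\mathrm{Donkey}(x) \rightarrow \mathrm{Mammal}(x)),\quad \mathrm{Donkey}(\mathrm{harvey}) \;\vdash\; \mathrm{Mammal}(\mathrm{harvey})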

Frege did not use the phrase “predicate logic” (it was introduced much later) to describe what he was struggling toward: a new system of logical notation, a “language” for the expression of logical theory. It was a language with its own syntax and semantics, which he applied, in turn, to constructing a new platform of principles so exacting that they could be used for propositions never before considered by logicians. Frege recorded his new approach to logic in his landmark book, Begriffsschrift (Concept Notation, 1879), little read prior to 1950, partly because of its author’s eccentric and extensive notations. Nevertheless, the use of symbols and formulas in logic today is due in no small part to Frege’s pioneering work. BS

1879

Psychology Laboratory

Wilhelm Wundt

The University of Leipzig opens the world’s first facility for studying psychology

Wilhelm Wundt (seated) is photographed with fellow psychologists at Leipzig University in c. 1910.

In 1862, German physiologist and psychologist Wilhelm Wundt (1832–1920), the “Father of Experimental Psychology,” offered the first-ever course in scientific psychology, at the University of Heidelberg. In 1879, now at the University of Leipzig, he established the world’s first “psychology laboratory,” an academic environment devoted to the research and study of experimental psychology. It was an initiative that marked the beginning of modern psychology (although the laboratory would not become “official” until 1885, when the university at last recognized it).

Across the Atlantic in Massachusetts, the U.S. psychologist William James (1842–1910) had managed to create a psychology laboratory twelve months before Wundt, at Harvard University Medical School, but James is generally not credited with being first because he used his laboratory purely for teaching purposes, rather than for experimentation. Wundt owed much to the University of Leipzig’s administrators, who permitted him to use an empty room to store the equipment he had been using in his lectures, and it was in that room that he first started his series of experiments.

Ever since the publication in 1862 of his book Contributions Toward a Theory of Sense Perception, Wundt—the first person in history to be called a psychologist—was determined that psychology would break free from being simply a branch of philosophy and become an independent discipline, a new science that would construct its own set of doctrines through the trial and error of rigorous empirical experimentation. He also determined early on that not all the work would take place within the laboratory. Wundt’s new science would bring in religion, the social sciences, language, the study of historical records, and the recording of field observations, all of which would contribute toward what Wundt called a sort of “scientific metaphysics.” BS

1879

Symbolic Logic

Gottlob Frege

A formal logical system that uses symbols to represent relationships between propositions

Symbolic logic has its roots in earlier logical systems, such as the syllogistic logic created by Aristotle in the fourth century BCE, yet those earlier systems invariably used words and languages to formulate arguments. In contrast, symbolic logic is a formalized language of arguments and reasoning for making and analyzing claims. It is formalized in the sense that it uses different symbols to represent declarative statements, such as “birds are animals,” and other symbols to represent operations or functions, such as “and,” “or,” “if,” and “then.” Like all of mathematics, its use is not dependent upon the user’s language.
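
For example, in the quantifier-and-connective notation that grew out of this tradition (the symbols shown are modern conventions), the declarative statement “birds are animals” becomes a formula built from a quantifier, a conditional, and predicate symbols:

    \forall x\,(\mathrm{Bird}(x) \rightarrow \mathrm{Animal}(x)), \qquad \text{using the operators } \neg\ (\text{not}),\ \wedge\ (\text{and}),\ \vee\ (\text{or}),\ \rightarrow\ (\text{if--then})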

In the late seventeenth century, German philosopher Gottfried Leibniz (1646–1716) made an attempt at creating a symbolic system of logical reasoning. In the nineteenth century, English mathematician, philosopher, and logician George Boole (1815–64) published influential works on mathematical and algebraic logic. Yet it is German mathematician, logician, and philosopher Friedrich Ludwig Gottlob Frege (1848–1925) who is widely regarded as the inventor of modern symbolic logic after the publication of Begriffsschrift (Concept Notation) in 1879.

Both mathematics and philosophy are dependent on the use of logic. The dream for many philosophers had long been to create a logical notation system that would quickly reveal the validity or invalidity of an argument simply by looking at it and understanding the symbols, just as mathematicians do when looking at a mathematical equation. With the introduction of symbolic logic, philosophers could delve into the logical structure hidden behind classical arguments, allowing them to solve some of the more troubling philosophical conundrums, such as St. Anselm’s ontological argument for the existence of God. More pragmatically, symbolic logic would also become the basis for computer programming. MT
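
That last link can be made concrete. The short Python sketch below—an illustrative toy, not any historical system—does mechanically what the philosophers dreamed of doing by inspection: it checks an argument’s validity by enumerating every assignment of truth values and searching for a counterexample:

    from itertools import product

    def is_valid(premises, conclusion, variables):
        # Valid means: no truth assignment makes every premise true
        # while the conclusion is false.
        for values in product([True, False], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False  # counterexample found
        return True

    # Example: from "p implies q" and "p", infer "q" (modus ponens).
    premises = [lambda e: (not e["p"]) or e["q"],  # p -> q
                lambda e: e["p"]]                  # p
    conclusion = lambda e: e["q"]                  # q

    print(is_valid(premises, conclusion, ["p", "q"]))  # prints True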

1879

Volapük

Johann Martin Schleyer

A universal language invented in the hope of unifying humankind

Volapük was a language invented in 1879 by German priest Johann Martin Schleyer (1831–1912). Schleyer claimed that God had appeared to him in a dream and told him to construct a universal language that would be easy to learn. His goal, he said, was to establish a means of communication “capable of expressing thought with the greatest clearness and accuracy,” with adjectives, verbs, and adverbs regularly formed, and words written as pronounced, thus minimizing difficulties in spelling.

“With respect to money, weights and measures, time zones, laws and language, the brothers and sisters of the human race should move toward unity.”

Johann Martin Schleyer

Schleyer wanted to create a phonetic alphabet and decided to form words composed of only one syllable, words that would be clearly audible. The result, however, was that few Volapük words suggested anything recognizable as words known to speakers of English, German, or Latin, despite their roots being drawn from English and other major European languages. “Vol,” for example, came from “world,” and “pük” from “speak”—hence Volapük, “world language.” The letters “Q” and “W” were not used, and the letter “R” was barely seen anywhere because Schleyer believed that pronouncing it would be difficult for speakers of Mandarin and Cantonese. “Animal” was “nim,” and “rhetoric” was “pükofav.”

Despite the scientific and literary communities showing little interest, the new language gained a wide following throughout the 1880s, with Volapük societies and periodicals emerging across Europe. The first gathering designed to promote the language convened in Vienna in 1882, and three Volapük congresses would later meet, in 1884, 1887, and 1889. But by the end of the decade, interest was waning. It did not help that, in 1889, Schleyer refused to accept the legitimacy of the newly elected president of the Volapük Academy, the Dutch professor Auguste Kerckhoffs (1835–1903). By the mid-1890s, the movement had all but collapsed. BS

1880

Historian’s Fallacy

Matthew Arnold

Writing history as though the participants could know what lay ahead of them

A portrait of British poet and culture critic Matthew Arnold, taken in about 1883. In The Study of Poetry (1880), he described the historian’s fallacy as a “natural” error.

The idea behind the “historian’s fallacy” was introduced by British poet and critic Matthew Arnold (1822–88) in his work The Study of Poetry, published in 1880. Referring to the study of historical antecedents in national literary styles, Arnold pointed out the logical error of using hindsight to attribute a sense of causality or foreknowledge of important historical events to the people who lived through them, when in reality they may not have had overall perspective. In the twentieth century, U.S. academic David Hackett Fischer (b. 1935) took up Arnold’s theme in his book, Historians’ Fallacies: Toward a Logic of Historical Thought (1970). The originator of the actual phrase “historian’s fallacy,” Fischer admonished fellow historians to remember that the people of the past had no knowledge of the future.

“No man is free from the logic of his own rational assumptions …”

David Hackett Fischer, Historians’ Fallacies (1970)

Historians falling prey to the fallacy, also known as “retrospective determinism,” use information unavailable to historic individuals to judge their decisions, often unfairly. They might argue, for example, that if the designers of the Titanic had known the type of hull damage that an iceberg strike would cause, they would not have made the walls on the lower levels stop short of the ceilings, enabling seawater rapidly to flood the ship. Similarly, if the firemen and police had understood that the buildings of the World Trade Center would collapse so catastrophically quickly in 2001, they would not have entered them.

In contrast, a historian might adopt a “view from nowhere” and strive for objectivity by ignoring their own knowledge of subsequent events and using only what the historic individuals would have known. As the traditional saying goes, “Hindsight is 20/20,” giving those who came later a false advantage over the people who lived through the original events, who had little possibility of knowing what the greater consequences of their actions would be at the time when they were making their decisions. PBr

1880

War Is Good

Helmuth von Moltke

War reinforces the virtues of society and makes better people of the participants

Helmuth von Moltke (center, clean shaven), pictured with his staff in 1871.

It was in a letter of 1880 to the international law expert Johann Kaspar Bluntschli (1808–81) that German Field Marshal Helmuth von Moltke (1800–91), chief of staff of the Prussian army for three decades, explicitly asserted the idea that “war is good” in the modern era. Von Moltke, who is also known for his contribution to the evolution of military strategy, believed that war was important for the advancement of morality. “Without war,” he argued, “the world would sink into a swamp of materialism.” Von Moltke believed that the practice of waging war could be moral (in the sense that there was an ethical way to fight a war), and also that war itself was an exercise of morality.

Von Moltke’s idea that war is good came from his belief that loyalty and courage are among the highest virtues of humankind, and that virtue can only be achieved through practice. War offers an opportunity for sacrifice according to a sense of duty, one that is not available during a time of peace. Thus, the idea that war is good rests on the notion that war is the only route by which human nobility and character can reach its highest form. Wars are often the source of glory for those who fight in them, and for good reason societies generally celebrate the acts of valor and virtue that have been demonstrated by warriors in battle.

Von Moltke argued that without war people within a society become lazy. A good society contains within it people who are ready to sacrifice themselves for the greater good. War provides proof that there is a greater good worth fighting for, and it also gives people the opportunity to exercise the virtues that are associated with self-sacrifice. Von Moltke’s doctrine of “war is good” goes some way to explaining why wars, and those who fight in them, are celebrated by many societies. PB

1880

The Grand Inquisitor

Fyodor Dostoyevsky

An examination into whether the church has an infantilizing effect on believers

The Grand Inquisitor (1985) by Ilya Glazunov, an illustration for Dostoyevsky’s The Brothers Karamazov (1880).

The chapter titled “The Grand Inquisitor” is set within The Brothers Karamazov (1880), the great philosophical novel on God, morality, and free will by Fyodor Dostoyevsky (1821–81). Told in the form of a parable, the chapter both advances the novel and engages the reader in a self-contained investigation of how much or how little humanity really does want to be free.

The parable concerns Alyosha, a young novice monk, whose brother, Ivan, tells him of the time Jesus Christ appeared in Seville, Spain, during the Inquisition (1478–1834). Christ is recognized as the Messiah and is worshipped and adored, but despite healing the sick and performing other miracles he is arrested and led to the Grand Inquisitor. The church leader informs Christ that his presence in Spain is interfering with the church’s mission; the church and its entrenched hierarchy were doing very well without him. Christ is at first sentenced to be burned at the stake, but is then released and told never to return.

The Inquisitor denounces Jesus, claiming that he was wrong to reject the temptations placed before him by the devil during his time in the wilderness. Refusing the first two temptations—to turn stones into bread, and to throw himself off a high point in Jerusalem so that the angels would lift him up—was bad enough, but refusing the third—to receive the kingdoms of the earth to rule over—was an error. Christ should have accepted that one because people cannot manage the “gift” of free will. Rejecting the devil’s temptations, the Inquisitor says, guarantees human beings free will, a burden they are ill-equipped to carry. People want security, not freedom; trying to follow Christ will only give them a lifetime of angst.

The Inquisitor says it is the church’s job to remove the burden of freedom and replace it with certainty. Readers are left to wonder whether they, too, have come to let the church think for them. BS

c. 1880

Impressionism in Music

France

A French school of composition reflecting Impressionism in the visual arts, which emphasized atmosphere and mood over emotion and story

French composer Ernest Chausson turns pages for friend and fellow musician Claude Debussy, at the Chausson family’s house in Luzancy (Seine-et-Marne), August 1893.

The painting Impression: Sunrise (1872) by Claude Monet (1840–1926) suggested to a critic the name of one of the most influential artistic movements in history. “Impressionism” implied a contourless and incomplete style of painting, as opposed to the prevalent highly detailed approach. A century earlier, Scottish philosopher David Hume (1711–76) had defined “impression” and “idea,” two basic types of perceptions: impression is the lively yet vague one “when we hear, or see, or feel, or love, or hate, or desire, or will.”

“You do me a great honor by calling me a pupil of Claude Monet.”

Claude Debussy, composer

Applied to music from around 1880—particularly that of Emmanuel Chabrier (1841–94) and Claude Debussy (1862–1918)—the term connects to the visual arts. In Debussy’s orchestral music, his innovative orchestration techniques create a lush and somewhat blurred effect, evoking an intimate and mysterious atmosphere. Many titles of Debussy’s works, such as La Mer (The Sea, 1905) and Nuages (Clouds, 1899), related to impressionistic motifs from the visual arts. Others, such as the Javanese-inspired Pagodes (Pagodas), conveyed impressions of exotic places. The early Impressionists also partly abandoned major/minor mode tonality in favor of a freer approach incorporating medieval church modes and other scales.

Consideration of the orchestra as a palette of sound colors would have profound impact on later composers, including Igor Stravinsky and Olivier Messiaen, and also on composers of movie scores, including John Williams. Impressionistic techniques lend themselves particularly well to the movie medium, in which music often supports the image, and vice versa, emphasizing orchestral colors rather than melodic themes. Impressionism’s influence was also striking in jazz, especially in the use of modal scales for improvisation from the 1950s onward, but also in the Debussy-like lyricism in the music of pianists such as Bill Evans. PB

1881

Universal Healthcare

Otto von Bismarck

A welfare state should step in to help workers in ill-health and old age

The establishment of the world’s first welfare state originated not in philanthropy but in hard-nosed self-interest. In 1881 the German chancellor, Otto von Bismarck (1815–98), was aware of the growing power of Germany’s social democrats, which he viewed as a threat to the monarchy and his own entrenched authority. What he needed was a measure that would swing the populace in favor of the conservatives, led by himself. Conscious of the need to placate his party members, and not himself wholly convinced about the need to establish laws to protect workers at their place of employment, von Bismarck nonetheless opened the debate with a speech to parliament in November 1881.

His new social agenda, which he termed “Practical Christianity,” consisted of what he called “sickness funds,” which covered approximately one in ten German workers, mostly those in industries such as metalworks, shipyards, railways, and power generation plants. The benefits included sick pay, a death payment, free pharmaceuticals, and certain services in hospitals and clinics. What was once only a vague principle had at last become something concrete, to be paid for through government revenues.

The world’s first welfare state was announced in 1883 with the introduction of the Health Insurance Bill, followed by the Accident Insurance Bill a year later. Health reform may well have been the German Chancellor’s least favorite piece of social legislation, but its significance as a step forward in the development of various universal healthcare schemes around the world cannot be overestimated. Von Bismarck had set an unstoppable juggernaut in motion. It was all made possible by Europe’s rapid industrialization, which could generate the revenues necessary to facilitate, and later expand, healthcare reform. BS

1882

The Death of God

Friedrich Nietzsche

Objective moral principles do not exist, and so God, their source, does not exist

German philosopher Friedrich Nietzsche (1844–1900), in his book The Gay Science (1882), declared, through the mouth of a madman, the death of God. Nietzsche used this literary, polemical device because he intended to show that if God were truly dead, then his madman was in fact a true prophet and the wisest of all men.

Nietzsche’s attack started with his belief that there are no objective moral principles, and ended with him saying that, since these principles were always seen as the most important descriptors of God, God, therefore, must not exist. Nietzsche felt that when people saw that there were no objective moral principles, they would, poetically speaking, murder God and begin to move into a post-Christian era. Indeed, in the years following Nietzsche’s death, the West became gradually less theistic. In fact, even some Jewish and Christian theologians (so-called “theothanatologists”) followed Nietzsche, trying to find room for religion without God and objective truth.

“God is dead. God remains dead. And we have killed him. Yet his shadow still looms.”

Friedrich Nietzsche, The Gay Science (1882)

Despite its massive influence, Nietzsche’s declaration was not much more than that. As an argument it was weak because he moved from an empirical fact, that different cultures exhibit different particular laws, to the hasty conclusion that no general moral principles or laws exist. Rational persons might disagree about the morality of abortion, but no rational person would disagree that simply killing a child for fun is always and absolutely wrong. And if objective moral principles, such as “It is always wrong to kill a child for fun,” exist, then it is certainly possible that an absolutely good God exists as well. AB

1883

Genetic Determinism

August Weismann

Genetics might account for the complexity of life, rather than environment alone

A strand of DNA, which contains the genetic code for the development and function of living organisms.

German evolutionary biologist August Weismann (1834–1914), delivering a lecture titled “On Inheritance” in 1883, was the first to propose that the development of life on Earth was as much the product of genetics as of the environment. He argued that it was the genetic makeup of organisms, their genotypes, not merely the environment, that helped to determine their structures, patterns, shapes, and colors, as well as their features, observable characteristics, and physical traits (phenotypes); even their behavior was genetically determined. Taking into consideration the natural selection theory of Charles Darwin (1809–82), Weismann taught the concept of “germinal selection,” which holds that genes are the source of our morphology, our agents of change. It was an idea that went unchallenged for more than a hundred years.

Genetic determinism was an advanced theory for its time, but the sequencing of the human genome, completed in February 2001, sounded the death knell for Weismann’s bold idea. The paucity of discovered genes (30,000 compared to an anticipated 100,000) led to the shocking realization among biologists that only 300 genes, for example, distinguished a human being from a mouse.

In the light of this evidence, genetic determinism fails to account for human complexity. But while it may well have been shown to be a failed scientific paradigm, there is still to this day no commonly accepted alternative. What can be put in its place? Scientists still do not know what prompts cells to change, and still have no theory to make sense of the many contradictions presented by the Human Genome Project (HGP). The HGP proved that there is an overwhelming similarity of genomes across species, and confirmed what some scientists had suspected for decades: that our genes alone cannot explain the vast complexity of life on Earth. BS

1883

Eugenics

Francis Galton

Crafting a higher form of humanity through genetic manipulation

A Nazi measures heads in search of Aryan characteristics in the Soviet movie Ordinary Fascism (1965).

In 1883, British anthropologist Francis Galton (1822–1911), cousin to naturalist Charles Darwin (1809–82), coined the term “eugenics” (from the Greek eugenes, meaning “well born, of good stock”) to describe the human quest for betterment through selective reproduction. “Eugenics,” he said, “is the science of improvement of the human race germ plasm through better breeding, [the] study of agencies under social control that may improve or impair the racial qualities of future generations, whether physically or mentally.”

Galton felt that humankind’s natural evolutionary processes were being hindered by philanthropic gestures to a genetically inferior underclass that perpetuated the inefficient “weak links” in heredity and got in the way of natural selection. Galton’s answer to this was to introduce an artificial means of selection to help circumvent this derailing of the natural order. Eugenics spread around the world and gained new adherents, particularly in the United States with the establishment of the Station for Experimental Evolution in 1904 and the Eugenics Record Office in 1910.

One primary manifestation of a belief in eugenics was racism. In the early twentieth century in the United States, “race consciousness” was heightened by eugenics; procreation suddenly became a public issue, with eugenics-inspired legislation concerning segregation, the forbidding of certain marriages, and sexual sterilization. Eugenics also inspired increased rates of abortion in support of social ideals.

The association of the Nazi Party in Germany with the darker principles of eugenics proved to be the pseudoscience’s death knell. By the mid-1940s, eugenics across the world had been all but abandoned. The dream, however, has persisted. With the mapping of the human genome and the advent of genetic engineering, eugenics, dressed up and reimagined, is about to enter a new and uncertain future. BS

1884

Pointillism

Georges Seurat

A painting style that abandoned brushstrokes for tiny, luminescent dots

A Sunday Afternoon on the Island of La Grande Jatte (1884, detail) by Georges Seurat.

It took two years for the French painter Georges Seurat (1859–91) to complete his acclaimed work A Sunday Afternoon on the Island of La Grande Jatte in 1884. In that time he would often visit the little island at the entrance to Paris in the middle of the Seine, sketching the people who frequented it so as to make his work as authentic in detail and form as possible. He experimented with new pigments, such as zinc chromate, which produced deep yellows for the island’s sun-dappled grass. More innovative still, instead of using long brushstrokes, Seurat chose to compose his painting using tiny dots of color. When viewed from a distance, the individual, uniformly sized dots combined to produce an image in wholly different colors, far more vivid than could ever be achieved with traditional brushstrokes. By combining colors optically, rather than mixing his pigments, he had invented a new form of painting, known initially as divisionism. Seurat himself named his method “chromoluminarism,” the ability to achieve maximum luminosity from color. In the late 1880s, however, art critics who thought they knew better began to call it pointillism; and the name stuck.

“Some say they see poetry in my paintings; I see only science.”

Georges Seurat

La Grande Jatte was a large painting, 6½ x 9¾ feet (2 x 3 m) in size, and was the first ever to be composed entirely in the pointillist tradition. When displayed at an Impressionist exhibition in 1886 it caused a sensation, a challenge to the great Impressionists Pierre-Auguste Renoir and Claude Monet, and launched a new direction in art: Neo-impressionism. Paul Signac, a contemporary of Seurat, described pointillism as “a means of expressing the optical mixture of tints and tones.” BS

1884

Intentionality

Franz Brentano

The characteristic of consciousness whereby it is conscious of something

Historically, the term “intentionality” was used for the first time in the eleventh century by St. Anselm (c. 1033–1109), archbishop of Canterbury, in his ontological argument for the existence of God. It was later used by St. Thomas Aquinas (c. 1224–74) in the thirteenth century when he tried to describe the process by which all living things thrust themselves out into the world. Its modern usage, however, dates to the German philosopher and psychologist Franz Brentano (1838–1917), who defined it in his book, Psychology from an Empirical Standpoint (1884), as a characteristic present in every conscious act.

The word itself was medieval and derived from the Latin word intendere, meaning to be directed toward a goal or thing. The concept of intentionality relates to the “aboutness” of things—for example, the sentence “cats are animals” is about cats (and also about animals). Something that is about (or represents) something else is said to “have intentionality,” or (in the case of conscious states, such as believing, desiring, thinking, hoping, perceiving, and fearing) is said to be an “intentional mental state.”

“Only some, not all, mental states and events have Intentionality.”

John R. Searle, Intentionality (1983)

“Intentionality” should not be confused with the common meaning of the word “intention.” It is not an indicator of one’s imminent thoughts or actions, but rather of the way in which our brains are capable of understanding our environment through the process of sensory input and perception. Intentionality is a tool that today’s sociologists, psychologists, and anthropologists have used to explain how all sentient creatures perceive the world around them. BS

1885

Subjectivist Fallacy

Friedrich Nietzsche

Whether people believe in something is irrelevant to whether or not it is true

The subjectivist fallacy is a recently identified fallacy in the history of ideas. One older instance of it is found in Beyond Good and Evil (1885) by Friedrich Nietzsche (1844–1900), but its earliest identification in literature is unclear. One possibility is that it arose along with the U.S. pragmatist response to problems facing the modernist project. As those ideas filtered into literary criticism and legal studies, it is possible that relativism influenced culture more broadly, culminating in an increase in the fallacy’s frequency.

Also known as the “relativist fallacy,” the subjectivist fallacy is an error in reasoning in which an arguer assumes that, because it is possible to disagree about the truth of a claim, either it is irrational to hold that claim or any attitude toward that claim is rational. For example, if one person asserts that abortion is immoral and someone else responds, “Well, not everyone agrees, so that can’t be right,” the latter has committed the subjectivist fallacy. Similarly, if someone claims that communism is immoral and someone else responds, “Well, you have a right to your opinion, but there are differing perspectives on the subject,” the latter has committed this fallacy.

“Talk of subordinate theories … is meaningful but only relative[ly] …”

W. V. O. Quine, Ontological Relativity (1969)

According to the law of noncontradiction, no proposition can be both true and false at the same time; thus, the fact that people may hold different opinions about the truth of a claim is irrelevant to the truth of that claim. The relevant questions are whether there is any evidence on which to form a judgment, and if so, whether that evidence is strong enough to justify belief in that claim. JW

1886

Dr. Jekyll and Mr. Hyde

Robert Louis Stevenson

A monumental tale contrasting good and evil, respectability and lust

Dr. Jekyll feels the effect of his potion in a dramatic depiction by U.S. illustrator Howard Pyle (1895).

Is it a detective story, fable, religious allegory, gothic novel? Literary critics have long discussed what genre, or mix of genres, Strange Case of Dr. Jekyll and Mr. Hyde (1886) by Scottish novelist Robert Louis Stevenson (1850–94) might belong to, but, regardless of its tag, this classic tale of one man’s struggle with the darker side of his nature is justly acclaimed.

The 1880s was a golden decade for the author, with the release of Treasure Island in 1883 followed by Kidnapped in 1886. But Dr. Jekyll and Mr. Hyde had a darker purpose and seemed to capture an increasing sense of pessimism, at least in artistic circles, about where the British Empire stood ethically after decades of rampant colonialism and the technological progress and hubris that had helped to make it possible.

“Sir, if that was my master, why had he a mask upon his face?”

Robert L. Stevenson, Dr. Jekyll and Mr. Hyde (1886)

The second draft of the book (Stevenson burned the first draft, written at a feverish pace after waking from a dream, because its theme so disturbed his wife) concerns a good and decent Londoner beset by an internal struggle of his own making. The erudite, outwardly respectable Dr. Henry Jekyll, who for the better part of his life has tried to suppress myriad evil urges, invents a potion able to separate the good and bad aspects of his character, but which brings forth from Jekyll’s innermost being the murderous Mr. Edward Hyde, with gruesome consequences. Over time, Jekyll’s character loses its battle to keep Hyde in check, and Hyde grows in influence and power until Jekyll becomes reliant on the potion simply to survive. The book is a classic study of the duality in us all, of our outward respectability and inward yearnings. BS

1886

Masochism

Richard, Baron von Krafft-Ebing

Sexual arousal resulting from subjection to the will of another, even to receiving pain

A caricature of Leopold von Sacher-Masoch (1890), from whose name the word “masochism” derives.

The term “masochism” was brought into common usage by Austro-German psychiatrist Richard, Baron von Krafft-Ebing (1840–1902) in his book Sexual Psychopathy: A Clinical-Forensic Study (1886), although the term existed beforehand. Fernanda Savage, translator of the novella Venus in Furs (1870) by Austrian writer Chevalier Leopold von Sacher-Masoch (1836–95), referred to Sacher-Masoch as “the poet of the anomaly now generally known as masochism. By this is meant the desire on the part of the individual affected of desiring himself completely and unconditionally subject to the will of a person of the opposite sex, and being treated by this person as by a master, to be humiliated, abused, and tormented, even to the verge of death.” Sacher-Masoch’s own relationships with women paralleled the one depicted in his book, in which a man signs a contract with a woman to make him her “slave.”

“Nothing sexual is depraved. Only cruelty is depraved, and that’s another matter.”

Marilyn French, The Women’s Room (1977)

Masochism is sometimes seen in conjunction with sadism, with the participants alternating the giving and receiving of pain. The “master” dominates the “slave,” usually in a rigidly controlled environment designed to protect the masochist, even while he or she is subjected to sometimes violent objectification.

One of the best-known books of the erotic genre, The Story of O (1954) by French author Anne Desclos under the pen name Pauline Réage, relates the story of a young woman initiated into the lifestyle. The popularity of works such as E. L. James’s Fifty Shades of Grey trilogy (2011) indicates a trend toward a normalization and acceptance of this more extreme type of interpersonal relationship. PBr

1886

Sadism

Richard, Baron von Krafft-Ebing

Sexual pleasure or satisfaction derived from inflicting pain on another

The idea that some individuals may derive sexual pleasure from acts other than intercourse was one of the areas of human behavior scrutinized by the new science of psychology in the nineteenth century. In his book Sexual Psychopathy: A Clinical-Forensic Study (1886), Austro-German psychiatrist Richard, Baron von Krafft-Ebing (1840–1902) examined the writings, novels, and plays of the French nobleman Donatien Alphonse François, Marquis de Sade (1740–1814), in which the aristocrat detailed the pleasure he had derived from inflicting pain on others. As a result of von Krafft-Ebing’s study, the term “sadism” passed into common usage.

De Sade’s debauched lifestyle included the exploitation of prostitutes, hired servants, and other innocents. He was eventually imprisoned for his crimes and also spent time in a mental institution, during which time he continued to write and where one of his plays was performed by the inmates. A play by German author Peter Weiss (1916–82), The Persecution and Assassination of Jean-Paul Marat as Performed by the Inmates of the Asylum of Charenton Under the Direction of the Marquis de Sade (1962), alludes to the fact that during the French Revolution (1789–99), when de Sade was freed for a time, he became part of the Revolutionary government and admired Marat, one of its leaders.

“When she’s abandoned her moral center … she is never more beautiful to me.”

Marquis de Sade, writer

Sadists generally prefer an unwilling victim and so, although masochists enjoy being abused, sadists do not always choose them for the practice. Sadism is found in varying degrees, from satisfaction derived from the public humiliation of an enemy to actual criminal behavior culminating in rape and/or murder. PBr

1886

Marginal Revenue Productivity

John Bates Clark

Examining how much firms can afford to pay an employee or contractor for their labor

A team of horses and their handlers transport a locomotive by road in c. 1860.

Businesses based on the capitalist model need to know whether their workforce is the correct size and whether it is paid at rates the business can sustain. In the nineteenth century, various economists took a mathematical approach to the question. The issue became important in the British industrial economy after the repeal in 1846 of the Corn Laws, which had kept the price of bread, and therefore wages, artificially high.

John Bates Clark (1847–1938) presented the marginal revenue productivity theory of wages in his book The Philosophy of Wealth (1886). The theory argues that a firm can afford to pay for a unit of labor only if that unit contributes at least its own cost to the firm’s income. The term “marginal revenue” (MR) describes the change in revenue that results from the addition of a unit of input where other factors remain equal. According to the law of diminishing returns, at a certain point investment ceases to yield an optimum return, and the return decreases for every extra unit of input.

“ … the distribution of income to society is controlled by a natural law …”

John Bates Clark, The Distribution of Wealth (1899)

Wages are mainly determined by the law of supply and demand, but the theory of marginal revenue productivity (MRP) also introduces the efficiency or productivity of the worker into the equation. A worker’s productivity—the extra output he or she can turn out, known as marginal physical product (MPP)—determines the revenue that the worker generates for the employing firm, and therefore how much the firm can afford to pay in wages. The economic equation used—MR × MPP = MRP—reveals the point up to which it is in a firm’s interest to take on more workers. JF
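As a rough illustration, the following sketch (all figures invented) applies the MR × MPP = MRP rule to a hypothetical firm facing diminishing returns; hiring remains worthwhile for each worker whose MRP at least covers the wage.

```python
# A minimal sketch of the MRP hiring rule, using hypothetical figures.
# A firm keeps hiring while a worker's marginal revenue product
# (MRP = MR x MPP) at least covers the wage that must be paid.

marginal_revenue = 5.0   # hypothetical revenue per extra unit of output
wage = 100.0             # hypothetical daily wage per worker

# Diminishing returns: each additional worker adds less output (MPP).
mpp_per_worker = [40, 35, 28, 22, 15, 9]

for worker, mpp in enumerate(mpp_per_worker, start=1):
    mrp = marginal_revenue * mpp
    decision = "hire" if mrp >= wage else "stop hiring"
    print(f"Worker {worker}: MPP = {mpp}, MRP = ${mrp:.2f} -> {decision}")
```

With these invented numbers, the fourth worker yields an MRP of $110 against a $100 wage, while the fifth yields only $75, so it is in the firm’s interest to stop at four workers.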

1887

Voluntarism

Ferdinand Tönnies

The sociological view that the will takes precedence over emotion and intellect

The term “voluntarism” was coined by German sociologist Ferdinand Tönnies (1855–1936) in his work Gemeinschaft und Gesellschaft (Community and Civil Society, 1887). Generally, it refers to the primacy of the will over intellect or emotion. Tönnies used the concept to refer to the ways in which people freely associate with one another, and specifically to the significance of both natural will (Wesenwille) and “rational” will (Kürwille).

For Tönnies, these expressions of the will are evidenced in two corresponding dimensions of social life. Gemeinschaft refers to “community,” which includes the natural bonds of family and embedded cultural identities, such as those stemming from religion and vocation. Gesellschaft refers to “society,” and involves relationships and commitments derived from the pursuit of personal interests and the achievement of goals external to a person’s more fundamental communal identity. The mandates and expectations involved in “community” are expressions of the natural will and are thus regulated from within the community. The means and ends involved in “society” are formulated by self-interest expressed through public opinion and regulated through mutual legislation.

“The wills of human beings interact in many different ways …”

Ferdinand Tönnies, Community and Civil Society (1887)

This distinction between two types of will continued to influence the social sciences in the twentieth century. As socioeconomic development caused communities to expand, the tension grew between social engagement as an end in itself and social engagement as merely a means to external ends. Tönnies’s voluntaristic insights apply to the oft-perceived divide between organic and artificial dimensions of society. JD

1887

Master Morality and Slave Morality

Friedrich Nietzsche

A definition of morality as being either proactive or reactive

A sixteenth-century painting of Odysseus and Circe, by Bartholomaeus Spranger. Greek heroes such as Odysseus epitomize Nietzsche’s concept of the “Noble Man.”

A central theme of On the Genealogy of Morality, published in 1887 by German existential philosopher Friedrich Nietzsche (1844–1900), is that there are two fundamentally different kinds of morality. There is “master morality,” which exists in the so-called “Noble Man,” who determines morality for himself and does not require any outside approval as to its rightness; and there is “slave morality,” which is the morality of the oppressed.

“These two moralities are not simple inversions of one another.”

Friedrich Nietzsche

Slave morality is negative and reactive; it sees the forces that are arrayed against it and replies, “No.” The master morality of the Noble Man, on the other hand, takes little heed of what is outside it or beyond it, and is concerned primarily with how to live in the present. Few things disturb the Noble Man, whose lifestyle affords him the luxury of self-indulgence, together with ambivalence about the tribulations of “weaker,” more numerous humans, born to suffer as they try in vain to make sense of slave morality. The Noble Man is the creator of his morality; the slave, conversely, merely responds to it.

For Nietzsche, resentment of the Noble Man is the source of the subversive natures of “weak” slave moralists, whom he regarded as evil. The hero of Greek poet Homer (eighth century BCE) was the epitome of a strong-willed Noble Man, and the ancient civilizations of Greece and Rome were founded on the principles of classic master morality. Nietzsche believed that the standard moral systems of his time were an expression of slave morality—particularly utilitarianism and Christianity, which both promoted the good of the community, not the individual, as an aid to survival. For Nietzsche, ideals such as equality and democracy were the opposite of what humans truly valued; instead, he argued that the strong have always preyed upon the weak, and societies do not exist for the good of all. BS

1888

The Secret Doctrine

Helena Blavatsky

A late nineteenth-century synthesis of science, religion, and philosophy

Helena Blavatsky, photographed at Ithaca, New York, in 1875, the year she cofounded the Theosophical Society with military officer, journalist, and lawyer Henry Steel Olcott.

The Secret Doctrine was published in two volumes in 1888 by Russian-born Helena Blavatsky (1831–91), self-proclaimed clairvoyant, master of theosophy, and cofounder, in 1875, of the Theosophical Society in New York. The first volume, Cosmogenesis, told how the universe and all sentient beings came to exist; the second, Anthropogenesis, gave a series of explanations for the evolution of mankind that were at considerable variance with virtually all other scientific theories at the time.

“Magic being what it is, the most difficult of all sciences … its acquisition is practically beyond the reach of the majority of white-skinned people.”

Helena Blavatsky

The book contains a number of fundamental propositions: there is an omnipresent, eternal, immutable principle that transcends human thought; there are numberless universes and endless planes of existence; there is an obligatory pilgrimage that all souls must make in accordance with karmic principles; living beings can be found everywhere, from inside subatomic particles to the grandest of universes, and all of them possess consciousness; and human races have come from seven quite separate evolutionary beginnings, each occurring in a different region of Earth, on landmasses that no longer exist—there is, for example, an Atlantean race, and also a race originating in “The Imperishable Sacred Land,” which is located somewhere near the North Pole.

The absurdity of Blavatsky’s book is now self-evident, but at the time it was not entirely ridiculed. Its publication coincided with an increased interest in both the occult and esoteric pursuits such as gnosticism, astrology, and Christian mysticism. Despite its impressive length, it contains no fresh insights into the nature of God; it is a collection of previously published religious texts laced with Blavatsky’s own questionable, subjective, unsubstantiated interpretations. Blavatsky claimed theosophy was ancient and universal, which raises the question of why it was left until 1888 for her to reveal it to a presumably grateful world. BS

1888

What Does Not Kill You Makes You Stronger

Friedrich Nietzsche

Hardship provides a necessary opportunity for the cultivation of superlative human beings

German philosopher and poet Friedrich Nietzsche in about 1870. He was interested in the enhancement of individual and cultural health, and his philosophies have inspired many.

The idea that “what does not kill you makes you stronger” was first articulated by German existential philosopher Friedrich Nietzsche (1844–1900) in his work, Twilight of the Idols (1888). The idea is mentioned in passing as an aphorism, but it is representative of the virtue ethic that Nietzsche developed throughout his writings.

“From the Military School of Life—Whatever does not kill me makes me stronger.”

Friedrich Nietzsche, Twilight of the Idols (1888)

Nietzsche’s philosophy is a character ethic focused on the cultivation of the Übermensch (superlative human being). All things in the world are manifestations of “will to power,” which is the natural tendency of things to strive toward empowerment and excellence. The Übermensch is the culmination of this process, a living work of art that represents the epitome of human accomplishment. Nietzsche rejects utilitarianism and deontology because these theories proscribe certain acts as universally wrong while saying nothing about what one must do to become an excellent human being. Suffering, for example, though temporarily painful, often helps in the cultivation of a superlative character, which is why it cannot be condemned as necessarily evil—if it does not destroy us, it has the potential to make us stronger. For Nietzsche, self-cultivation is a process of constantly assessing, demolishing, and rebuilding the self, and anything that assists in this process, no matter how painful in the short term, is a boon. The idea is embodied in the notion of amor fati (love of fate): if a person wishes to affirm the quality of their character, they must acknowledge the value of all of the things that made it, both the pleasurable and the painful, the good and the bad.

Although “What does not kill you …” has become a well-known aphorism, it is not commonly attributed to its author. Nietzsche believed that all philosophy is autobiography, and his views on this issue were probably motivated by the personal tragedies and ill-health that he had to endure during his lifetime. JM

1888

Vector Spaces

Giuseppe Peano

A mathematical concept that views things as vector spaces and fields, not objects

Italian mathematician Giuseppe Peano (1858–1932), a founder of mathematical logic and set theory, defined the vector space in his work Geometrical Calculation According to the Ausdehnungslehre of H. Grassmann, published in 1888. Mathematically, a set of vectors forms a vector space provided that it meets several conditions, such as closure under addition, the associative law, the commutative law, and the distributive laws.
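In modern notation, and restricted to the conditions named above, these requirements can be sketched as follows for vectors u, v, w in a space V and scalars a, b (a standard modern statement, not Peano’s original wording):

```latex
\begin{align*}
&u + v \in V                 && \text{(closure under addition)}\\
&(u + v) + w = u + (v + w)   && \text{(associative law)}\\
&u + v = v + u               && \text{(commutative law)}\\
&a(u + v) = au + av, \qquad (a + b)u = au + bu && \text{(distributive laws)}
\end{align*}
```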

“Now any mathematical quantities which can be added to give new quantities of the same nature may be represented by vectors in a suitable vector space with a sufficiently large number of dimensions.”

Paul Dirac, physicist

As an abstract way of extending Euclidean geometry’s application to physical space, the vector space permits the scientist to assign numbers to given states of things at a certain time, and to calculate the results of forces acting on those things into the future. For example, suppose the things are physical particles: when multiple forces simultaneously act on a particle at a point, each represented by a vector, the resulting combined force is easily calculated because forces combine as vectors add. A collection of points forming a field with infinite degrees of freedom can be acted upon by numerous simultaneous forces, and the subsequent behavior of the field can be predicted.
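The force example can be made concrete in a few lines. The sketch below (with invented component values, in newtons) shows three simultaneous forces on a particle combining into a single resultant by simple component-wise addition.

```python
# A minimal sketch of forces combining by vector addition, with three
# hypothetical forces (components in newtons) acting on a particle.

def add_vectors(*vectors):
    """Return the component-wise sum of equal-length vectors."""
    return [sum(components) for components in zip(*vectors)]

f1 = [3.0, 0.0, 0.0]    # a force along the x axis
f2 = [0.0, 4.0, 0.0]    # a force along the y axis
f3 = [-1.0, -1.0, 2.0]  # an oblique force

print(add_vectors(f1, f2, f3))  # [2.0, 3.0, 2.0], the single resultant force
```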

After calculus, no mathematical technique is more useful across science than vector spaces and the linear algebra used with them. Modern theoretical physics—from classical mechanics, fluid theory, electromagnetic field theory, solid state physics, and particle physics, all the way to special and general relativity and quantum physics—relies extensively on them. English theoretical physicist Paul Dirac (1902–84) found vector spaces crucial for handling the principle of superposition and other problems in quantum mechanics. Even disciplines such as neuroscience make use of vector spaces. In 1989, U.S. philosopher Paul Churchland (b. 1942) suggested that mental states represented as sentences should be replaced by a neural network theory in which brain states are handled by vector spaces for the flow of neural representations. JSh

1889

Cultural Relativism

Franz Boas

Beliefs, customs, and ethics are relative to an individual in his or her own social context

Apache leader Geronimo meets U.S. General George Crook in a cross-cultural council in 1886.

The phrase “cultural relativism” may not have been coined until the social theorist Alain Locke (1886–1954) used it in 1924 to comment on the book Culture and Ethnology (1917) by anthropologist Robert Lowie (1883–1957), but the idea was already decades old. It was Franz Boas (1858–1942), the German–American founder of modern anthropology, who first developed the concept, beginning with his essay “On Alternating Sounds” (1889). Boas regarded individual societies, with all their associated cultural peculiarities, in the same way that naturalist Charles Darwin (1809–82) saw all of the species in the Galapagos Islands, not as any one species being superior to any other, but as all having adapted to their own circumstances. Boas then applied this thought to the science of anthropology, and arrived at the principle of cultural relativism. All cultures were unique, he argued, a product of their own individual histories—not just of ethnicity or environment. These views may seem orthodox to us now, but they were deduced at a time when racism and bigotry in the United States were threatening to cripple anthropology itself, with the discipline almost daring its detractors to label it “pre-scientific.” The fact that Boas was able to rescue it from this milieu is nothing short of miraculous.

Boas abandoned the idea that understanding people meant merely to study what they ate and drank, or their tastes in music and religion, realizing that a person’s beliefs should be seen in the terms of their own culture, not through the prism of our own. By using the cephalic index (which measured the width-to-length ratio of people’s heads), Boas set out to prove that the head forms of immigrants’ children had altered in only a generation and were not fixed. Boas dented the arguments of anthropology’s racists, such as Samuel Morton (1799–1851), who had claimed that some races were immutable and had a separate creation—a common justification for slavery. BS

1889

Ghost Dance

Wovoka

A Native American ritual dance that, it was believed, would drive away white people

An illustration of Sioux Native Americans performing the Ghost Dance in c. 1891.

A ritual designed to regenerate the earth and unify Native American tribes, the Ghost Dance reached its peak in the final decade of the nineteenth century, when the slaughter of buffalo herds, the appropriation of traditional tribal lands, and the systematic killing of Native Americans and their corralling into reservations were reaching their gruesome height.

In 1889, the dance was introduced into the Northern Paiute tribe, inhabitants of areas in present-day eastern California and western Nevada, by a Northern Paiute spiritual leader, Wovoka (c. 1856–1932). Since his youth, Wovoka had been revered as a powerful medicine man; he was able, so the stories went, to control the weather, bring an end to a drought or flood, and even catch a bullet (a reputation that later led the Lakota Sioux to wear “bullet-proof” Ghost shirts that, it was hoped, would protect them in battle). In January 1889, Wovoka had a prophetic vision about the resurrection of his tribe’s dead, the extinction of the white invaders, and the restoration of Native American prosperity. Trying to keep alive his people’s diminishing hopes for their future, Wovoka taught that salvation would come if his people remained upright and virtuous, at the same time regularly performing the Ghost Dance.

The dance, which itself was a form of round dance, was performed in strict five-day gatherings. The ceremony rapidly spread from the Northern Paiute homelands across Oklahoma and the Great Plains. It was adopted by every tribe that it touched—Arapaho, Apache, Pawnee, Sioux, and Cheyenne, as well as other, smaller tribes. Wovoka’s message was not a call to arms, however. The spiritual leader was imploring his people, so desperate to gain revenge for a litany of wrongs and oppression, to achieve their objectives through dance. The mysterious ceremony seemed threatening to the European colonizers, who barely understood its religious significance. BS

1889

Expressionism in Art

Vincent van Gogh

An art movement that emphasized the importance of self-expression

Vincent van Gogh’s The Starry Night (detail, 1889) externalizes the artist’s inner turmoil.

The Dutch Postimpressionist painter Vincent van Gogh (1853–90) saw the world differently from most of the Impressionist painters who surrounded him. Instead of capturing the light and colors of the natural landscape as a dispassionate observer, as the Impressionists had done, van Gogh looked inside his troubled psyche and discovered a new style of self-expression. Van Gogh’s art provided a mirror for his angst-ridden soul and, years later, it would lead to the formalization of an entirely new kind of painting. Van Gogh once wrote in a letter to his brother, Theo, “Instead of trying to reproduce exactly what I see before my eyes, I use color more arbitrarily to express myself forcibly.”

Van Gogh’s The Starry Night (1889) depicts the view from his sanatorium window at night, but its swirling sky and luminous stars are no faithful representation of what he saw; exaggerated and distorted, they suggest his inner reality. The Starry Night is now seen as a pivotal painting in the march toward Expressionism.

“My art is self-confession. Through it, I seek to clarify my relationship to the world.”

Edvard Munch, artist

Four years later came The Scream (1893) by Norwegian painter Edvard Munch (1863–1944), another icon of Expressionism. The painting depicted Munch himself, pausing while crossing a bridge and crying out in desperation from the blur of his anxiety-informed world. Like The Starry Night, the painting has the ingredients of Expressionism—the use of strong, nonnaturalistic colors and distorted lines—many years before the Expressionist movement had its “official” beginnings with the German artistic group Die Brücke (The Bridge), who met together for the first time in Dresden in 1905. JMa

1890

The Binding Problem

William James

The question of how the brain binds data supplied by the senses and memory

The U.S. pragmatist philosopher William James (1842–1910) formulated early versions of the binding problem in two works: The Principles of Psychology (1890) and A Pluralistic Universe (1909). As explained by James, the binding problem concerns the mechanisms by which human minds are capable of experiencing various sensory stimuli as being part of a single, unified “consciousness.” This consciousness, traditionally thought of as the “soul,” seems to serve as a receptacle for experiences such as hearing a noise, seeing blue, and remembering childhood.

Current research on the binding problem did not get underway until around 1980, when psychologist Anne Treisman (b. 1935) proposed the feature-integration theory of attention, which purported to explain how the sensory experiences of the various features of an object could be psychologically “integrated” into an experience of a single object. At around the same time, the biophysicist Christoph von der Malsburg (b. 1942) wrote an article investigating the neural processes allowing the brain to represent the relations between discrete visual stimuli, such as colors or shapes, which are picked up by different neurons.

“No possible number of entities … can sum themselves together.”

William James, The Principles of Psychology (1890)

Neurologists, psychologists, and philosophers of mind have continued to research the binding problem, with somewhat different emphases. Some researchers have focused on specific neural functions, such as the processing of visual information, while others have focused on “big picture” questions, such as the emergence of a “unified” consciousness from a large number of seemingly discrete brain processes. BSh

c. 1890

Psychoanalysis

Sigmund Freud

Human behavior is dictated by unconscious and often irrational desires

Sigmund Freud as a young man, photographed with his fiancée, Martha Bernays; they married in 1886.

Psychoanalysis, pioneered in the 1890s by Austrian neurologist Sigmund Freud (1856–1939), is both a method of treating people with mental illness and a theory about why people act the way they do. Psychoanalytic theory holds that unconscious, irrational drives cause human behaviors, and that many desires, formed during childhood, are repressed and lie hidden in the human subconscious. People can overcome their psychological problems by understanding what their minds have repressed and by accepting their unconscious desires.

Freud developed psychoanalysis as he studied people with “nervous ailments,” such as “hysteria.” He developed a comprehensive method of understanding human behavior and personality, and their root mental and emotional causes, while also developing a therapeutic method that psychologists could use to treat people suffering from such conditions. Freud’s methodology and ideas became so renowned that his name is still largely synonymous with psychoanalysis.

“The interpretation of dreams is the royal road to a knowledge of the unconscious.”

Sigmund Freud

As one of the first scientific methodologies aimed at dealing with mental illness, psychoanalysis paved the way for a much broader understanding of the human mind. While many of Freud’s psychoanalytic theories have been criticized and dismissed by modern psychologists and medical professionals, his work provided new insights into human nature itself. Psychoanalysis showed that what we think about ourselves can be greatly influenced by forces outside our control, and that our strongest beliefs about ourselves may not be reflective of reality. MT

1890

Freudian Slip

Sigmund Freud

Everyday slips of the tongue provide insights into the workings of the mind

In 1890, in a letter to his friend, the physician Wilhelm Fliess, Austrian neurologist Sigmund Freud (1856–1939) listed numerous examples he had noticed of a curious tendency of people to make errors in speech, due perhaps to inattention, incomplete data, or a strong prior-response pattern. In his book The Psychopathology of Everyday Life (1901), Freud later referred to these errors in German as Fehlleistungen (faulty actions). He theorized that what we now call “Freudian slips” might represent the surfacing of an unconscious thought or wish; they were perhaps symptoms, he said, of the ongoing struggle between our conscious view of reality and those things we repress in our unconscious: they are verbal mistakes that reveal repressed beliefs. Freud’s English translator called them “parapraxes,” from the Greek meaning “another action.” Such slips of the tongue, or linguistic faux pas, were random expressions of unconscious processes in otherwise normal individuals.

“My hypothesis is that this displacement … is not left to arbitrary psychical choice …”

Sigmund Freud

For Freud, every little error contained potentiality, whether making a wrong turn while driving a car, dialing a wrong phone number, or misspelling an unfamiliar word. “In the same way that psychoanalysis makes use of dream interpretation,” he once said, “it also profits by the study of numerous little slips and mistakes which people make.” So is it possible that a Freudian slip is nothing more than a mistake or a lapse in concentration? After all, even Freud once told a student who asked him if there were an underlying psychological need to smoke a cigar: “Sometimes, a cigar … is just a cigar.” JMa

1890

News from Nowhere

William Morris

A utopian novel inspired by the socialism and social science of Karl Marx

The frontispiece of William Morris’s News from Nowhere (1890) featured an engraving by W. H. Hooper of Kelmscott Manor, Oxfordshire, Morris’s home from 1871 until his death in 1896.

News from Nowhere or An Epoch of Rest is a utopian romance set in a future Britain, written in the light of Marxist ideals as interpreted by Arts and Crafts artist and socialist thinker William Morris (1834–96). The book was published in 1890.

“A map of the world that does not include Utopia is not worth even glancing at … Progress is the realization of Utopias.”

Oscar Wilde, The Soul of Man under Socialism (1891)

The word “utopia,” coined by Sir Thomas More (1478–1535), has a double meaning, being derived from the Greek eu-topos (a good place) and ou-topos (no place). This etymology underlines the question posed by all such descriptions of society: can any utopia ever really exist?

Originally published as a serial in Commonweal, a newspaper of the Socialist League, News from Nowhere tells the story of a Victorian man, William Guest, who is transported to a future world. Journeying along the River Thames, he meets a variety of individuals and groups who are living according to Morris’s interpretation of the Marxist ideal. Capitalism has been abolished in favor of a cooperative socialist society in which private property, the class system, marriage, formal education, poverty, and most crimes do not exist. In the community of the future, everyone is happily engaged in activities necessary for the continued functioning of society. Work is consciously done for the benefit of all and for that reason it is fulfilling and creative.

Central to Morris’s depiction of Britain in the twenty-first century is the concept of returning to a pastoral, pre-industrialized age, in which factory towns such as Manchester have ceased to exist and state control has disappeared. News from Nowhere was written as a response to Looking Backward: 2000–1887 (1888), a science fiction novel by U.S. author Edward Bellamy (1850–98). Bellamy’s work is set in a technologically advanced future in which human labor has been reduced by increased mechanization and the state works to make life easier. PBr

1891

Subsidiarity

Pope Leo XIII

The belief that government should only do what individuals or private groups cannot

An edition of the daily Parisian newspaper Le Petit Journal, featuring Pope Leo XIII on the front cover. It is dated August 15, 1891, shortly after Pope Leo XIII issued his Rerum Novarum.

Although the term “subsidiarity” only entered the popular lexicon in the twentieth century, the idea was old and well developed in Western political thought long before then. The term, derived from the Latin subsidium, means “support, help, or assistance.” The concept asserts that a matter, situation, or problem ought to be handled by the lowest or least centralized authority capable of handling it effectively. The principle of subsidiarity was first formally presented in the Rerum Novarum (1891) of Pope Leo XIII (1810–1903). This encyclical on capital and labor was primarily concerned with working conditions and was an attempt to articulate a middle course between laissez-faire capitalism and the various forms of communism.

“[Subsidiarity] is a fundamental principle of social philosophy, fixed and unchangeable, that one should not withdraw from individuals and commit to the community what they can accomplish by their own enterprise and industry.”

Pope Pius XI, Quadragesimo Anno encyclical (1931)

The principle of subsidiarity was further developed in the Quadragesimo Anno encyclical of Pope Pius XI (1857–1939) in 1931, written in response to the rise of totalitarianism, and it was also influential in the Solidarity (Solidarność) movement in Poland that emerged in the 1980s, led by Lech Wałęsa. Subsidiarity is now best known as a foundational principle of European Union (EU) law, which means that the EU may only make laws or otherwise act in cases in which the action of individual member countries proves insufficient to settle a matter satisfactorily. Subsidiarity was formally established in EU law by the Treaty of Maastricht, signed by the twelve member states in 1992 and in force from 1993.

Subsidiarity is now an organizing principle of decentralization within the EU, and represents a careful and influential attempt to balance the maintenance of the autonomy of member units with a recognition of their practical imperfections when it comes to achieving certain ends. The concept has also found applications in the fields of political science, cybernetics, management, and the military. JE

1891

Kinetoscope

William Dickson

An object that enabled a single viewer to enjoy a show of moving images

A man watching a film on a kinetoscope, through a peephole window at the top of the device.

Thomas Edison (1847–1931) and his assistant, the inventor William Dickson (1860–1935), began work on the kinetoscope, a machine that allowed a single viewer to watch through a magnifying viewfinder as up to 50 feet (15 m) of moving looped film ran through the machine, providing the illusion of moving images. Edison conceptualized the invention, while Dickson was responsible for its practical development. The prototype was completed in 1891. From 1893, viewers could watch footage of circus-style tricks for a nickel; the films themselves were shot in Edison’s specially built Black Maria film studio, or kinetographic theater, which was built on a revolving platform to catch the correct amount of sunlight through its retractable roof.

Similar technology went into Edison’s kinetograph, an early motion-picture camera. The motion picture was improved upon by the Lumière brothers in 1895; they named their version the cinématographe. It combined a camera, printer, and projector, and was far more lightweight than Edison’s kinetograph. Another distinction of the cinématographe was the Lumières’ decision to utilize intermittent movement in the design, whereas Edison had attempted to perfect continuous movement. In the event, Edison’s company, unable to produce a workable motion-picture projector on its own, purchased Thomas Armat’s Phantoscope in 1896, renaming it the Vitascope. The Vitascope went on to become the first commercially successful motion-picture projector in the United States.

Edison’s idea, made real by William Dickson and improved upon by the Lumière brothers, gave rise to the moving image, a technology that underpins a number of our contemporary media. Without it our world would be a dramatically different place. JE

1891

Artistic Symbolism

Paul Gauguin

Art that features symbols to express the artist’s inner world, dreams, and spirituality

A detail of Paul Gauguin’s Where Do We Come From? What Are We? Where Are We Going? (1897–98).

In March 1891, an article by French Symbolist poet and art critic Albert Aurier (1865–92) appeared in the periodical Le Mercure de France. Titled “Le Symbolisme en Peinture—Paul Gauguin,” it described the psychologically complex but visually simplified and nonnaturalistic painting style of the French artist Paul Gauguin (1848–1903) as “artistic Symbolism.”

Symbolism in the arts was already long established, having been inaugurated by the poetry of Charles Baudelaire (1821–67). The Symbolist movement in the visual arts had been inspired by the French Symbolist writers and poets of the 1880s, who believed that words paralleled nature rather than imitated it. Artistic or pictorial Symbolism was a reaction against realism, which attempted an exact and direct representation of nature. The Symbolist artists saw subjective expression as a more valid approach to representation than objective reproduction of exact forms. Their works contained elements of dreams and the imagination, presented to the viewer through recognizable symbols. The artists used line, broad strokes of color, and simplified form to reflect and emphasize their inner emotional lives.

Gauguin and Spaniard Pablo Picasso (1881–1973) were outstanding artists of the movement. Gauguin traveled to the South Sea Islands and produced numerous Symbolist works there, including the painting Where Do We Come From? What Are We? Where Are We Going? (1897–98), which demonstrates the standard elements of his visual style: darkly outlined forms, flat areas of bright color, and an idiosyncratic use of personal symbols. Gauguin likened his paintings to Symbolist poems in that both defy easy explanation. The visually simplified but psychologically dense works of Picasso’s “Blue Period” (1901–04), also featuring dark outlines but with a narrow palette of blue and blue-green, are also examples of the Symbolist style. PBr

1892

Sense and Reference

Gottlob Frege

A theory that distinguishes between the sense and reference of signs in language

The German philosopher and mathematician Gottlob Frege (1848–1925), in his paper “Über Sinn und Bedeutung” (On Sense and Reference, 1892), explained how the words “sense” and “reference” represent two different aspects of a term’s meaning. Sense usually refers to how we perceive an object or the degree of information we are given about it, while reference is the indicated object itself. Frege, however, objected to the notion that the meaning of a term is nothing more than its constituent reference, and postulated that a proper name is also composed of what he called its sense, in that it possesses an aspect which differentiates it from another object that has the same name. An example would be, “The leader of the United States’ Democratic Party in 2011” and “the president of the United States in 2011”: the two descriptions are not alike in sense, but the reference, Barack Obama, remains the same.
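The Obama example can be put schematically. The following toy sketch (an illustration rather than Frege’s own formalism) treats senses as descriptions and reference as the object that each description picks out:

```python
# A toy sketch, not Frege's formalism: two distinct senses -- modes of
# presentation -- that pick out one and the same referent.
senses = {
    "leader of the U.S. Democratic Party in 2011": "Barack Obama",
    "president of the United States in 2011": "Barack Obama",
}

s1, s2 = senses                    # the two descriptions differ as senses...
assert s1 != s2
assert senses[s1] == senses[s2]    # ...yet their reference is identical
```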

“The sense of a proper name is grasped by everyone who knows the language …”

Gottlob Frege, philosopher

Frege was interested in establishing whether there is only a relationship between objects, or whether that relationship extends to include the names and signs we attribute to them. People may have differing recollections of sense, such as a feeling or a mood, that are associated with their memory of an object. For example, two people may look at the moon through a telescope but have differing recollections of what they saw, depending upon how obscured their view was and other random variables. Then there is the problem of common names, such as “the morning star”: for anyone looking at this star, their sense of it will be incorrect if they are ignorant of the fact that the morning star and the evening star are one and the same. BS

1893

Universal Suffrage

New Zealand

All adult citizens have the right to vote, regardless of sex, race, and social status

British suffragettes campaign for women’s rights and the vote in London, 1908.

The idea of universal suffrage, that all adult citizens ought to have the right to vote irrespective of race, sex, belief, wealth, and social status, gained credence with the rise of representative governments in parts of Europe, the United States, and independent Latin American nations. In some countries, the idea of universal suffrage seemed to be an organic extension of representative government, but in many others the concept of universal suffrage was far from intuitive. Consequently, the implementation of universal suffrage often lagged significantly behind the extent to which it had spread in theory.

While some early constitutions guaranteed suffrage to all men (but not women), the founders of most of the first representative systems of government intended representative government to benefit propertied men specifically; in some countries, such as the United States, more specifically still, it was to benefit white propertied men. Such emphasis placed significant barriers (de facto or de jure) between the institutions of government and women, the poor, and people who were not white.

“The struggle for woman suffrage [is] no white woman’s struggle …”

Carrie Chapman Catt, woman’s suffrage leader

Universal suffrage, a powerful idea that most now accept as common sense, was long in the making. Votes for women arrived late in the Old World, and nationwide women’s suffrage was ratified in the United States only in 1920. The first country in which suffrage was extended irrespective of gender was New Zealand, in 1893. Gradually following suit were Australia in 1901, Finland in 1907, and Norway in 1913. Without universal suffrage, the political landscape would look very different today. JE

1893

Anomie

Émile Durkheim

The theory that a dysfunctional labor market could lead to social decay

Sociology’s concept of anomie, first mentioned by French philosopher Jean-Marie Guyau (1854–88) in his book The Nonreligion of the Future (1885), was used to describe, in broad terms, a society operating without any fixed laws. The term was later popularized by the French sociologist Émile Durkheim (1858–1917) in The Division of Labor in Society (1893). Here, its definition was broadened to refer to the increasing failure of guild labor collectives to meet the needs of an evolving middle class; inertia was retarding their ability to change and adapt in the industrialized world.

However, Durkheim took the idea one step further, claiming that anomie could progress beyond guild collectives and labor markets and enter society proper. Then all social norms could become confused or unclear, a state he referred to as “normlessness.” Anomie begins, according to Durkheim, with economic deregulation. If left unchecked, it can create a situation in which social norms no longer determine the functioning of a society. In the case of the individual, this “normlessness” or sense of alienation could lead to self-harm, which Durkheim would later describe in his book Suicide (1897).

“The state of mind of the individual who has been pulled up from his moral roots.”

Robert MacIver, sociologist

The U.S. sociologist Robert MacIver (1882–1970) later characterized individuals beset with anomie as having their sense of attachment to society “broken down.” Durkheim himself asserted in Suicide that anomie was capable of creating what he labeled a “permanent disease of industrial society” and the “negation of morality,” in which state we could not moderate our desires or limit our needs and passions. BS

1894

Minimum Wage

New Zealand

A legally enforced minimum amount that any worker can expect to be paid per hour

The idea of a minimum wage was first campaigned for in the 1890s by a group of sweatshop workers (known as “sweaters”) who protested bitterly at the conditions in their Australian workplaces. It was in New Zealand, though, that a law recognizing workers’ rights was first introduced in 1894, and only in 1896 did Australia attend to the matter by establishing wage boards. The United Kingdom set up its own, similar system in 1909, and boards were adopted by Massachusetts in 1912. However, it was not until 1938 that the United States established the Fair Labor Standards Act for all workers.

Setting the lowest legal hourly wage payable to any worker served two purposes: it reduced the chances of civil unrest, and it also worked against poverty and created tolerable living conditions for workers in low-paid jobs. In that sense, the legislation was enacted in tacit recognition of the universality of the human right to fair treatment at work. The actual minimum rate is decided either through collective bargaining or by a judgment from government. It varies according to geography, economic conditions, and the workforce’s degree of political power, among other factors.

Globalization has somewhat undermined the political impact of minimum-wage legislation because purely profit-driven industries now move factories from strict to less strict jurisdictions. Opinion has polarized between those who advocate a fair wage as a minimum standard and those who see intervention in the market place as detrimental meddling. Detractors claim that a minimum wage drives up costs and threatens business closures, which in turn cause unemployment rates to rise. Proponents hold that, by protecting a minimum standard of living, the legislation enables those who want to improve their situations to do so. This springs workers from the poverty trap and leads toward the creation of a better educated workforce. LW

1895

X-rays

Wilhelm Röntgen

A type of electromagnetic radiation, used in medicine to pass through soft tissue and provide an image of a hard structure within, such as part of the skeleton

An X-ray of the hand of Wilhelm Röntgen’s wife, Anna, taken in 1896. Being made of metal, Anna’s ring, a pair of compasses, and a circular object absorb the X-rays more than skin or bone.

Wilhelm Röntgen (1845–1923) was professor of physics at the University of Würzburg, Germany, when, in 1895, he was passing electron beams, known at the time as cathode rays, through a gas container at very low pressure. He discovered that under the right conditions—involving a darkroom, a darkened discharge tube, a fluorescent screen, and a paper plate treated with barium platinocyanide—some materials appeared to become partially transparent, allowing an image to be created on a photographic plate. Denser materials appeared more clearly on the image than lighter ones; in one image, the hand of Röntgen’s wife revealed her bones and a ring in black, with a light grey shadow for the surrounding flesh. Röntgen named the new phenomenon the X-ray, meaning “unknown ray.” It was many years before scientists learned how the process worked, and the dangers of high-dosage X-rays were not at first understood.

“Treat the patient, not the X-ray.”

James M. Hunter, quoted in Arnold Melnick, Melnick on Writing (2012)

Substances such as metals and calcium in bones absorb radiation more readily than soft tissue, and so the X-ray quickly became a useful tool for medical examinations. Using X-ray technology, doctors could look for a broken bone or swallowed hard object, and check for cancerous lumps or even lesions caused by tuberculosis. X-rays revolutionized the way in which doctors considered the diagnosis process itself, and opened up many new avenues of scientific exploration.

Röntgen’s discovery laid the foundation for such diagnostic technology as ultrasound and electromagnetic scanners, which look inside the body rather than making deductions from what can be seen on the outside. Throughout the twentieth century, medical practitioners came to rely more and more on technological aids to back up, or even inform, their diagnoses, rather than using experience and the observation of symptoms to arrive at them. JF

1895

Art Nouveau

Siegfried Bing

An artistic style characterized by free form, sinuous line, and organic motifs

An Art Nouveau poster designed in 1898 by Henri Privat-Livemont, advertising Absinthe Robette.

The Salon de l’Art Nouveau, opened in 1895 by art dealer Siegfried (aka Samuel) Bing (1838–1905) in Paris, was the first showcase for the “new” art style sweeping both Europe and the United States from 1890 onward. Before Art Nouveau, the late nineteenth century had been characterized by a balancing act between the strict order and historicism of the Neoclassicists and the emotional and visual chaos of the Romantics. Looking to the natural world but moving beyond it for free-flowing, organic form allowed the practitioners of the “new art” to create graceful works that built on traditional styles but also transformed them.

Some critics trace the visual style back to Celtic manuscript illumination with its interlacing knot patterns, others to the Rococo love of the curvilinear and extreme elaboration. Precursors include the works of English Aesthetic movement illustrator Aubrey Beardsley (1862–98), Arts and Crafts designer William Morris (1834–96), and ukiyo-e Japanese printmakers, such as Katsushika Hokusai (c. 1760–1849).

“ … the curve undulating, flowing, and interplaying with others …”

Nikolaus Pevsner, Pioneers of Modern Design (1936)

In his book Pioneers of Modern Design (1936), Nikolaus Pevsner (1902–83) suggests that Art Nouveau was the transitional style to the modern era. It certainly incorporated many of the philosophical and societal trends of the period from 1890 to 1910. Whether it was a reflection of artists wanting to break free of societal norms or a quest for aesthetic purity removed from moral judgments, the explorations of Art Nouveau touched everything from graphic design to furniture and began the modern era, foreshadowing later modern trends such as abstraction and Surrealism. PBr

1895

Daylight Saving Time (DST)

George Vernon Hudson

A proposal to create more hours of daylight by altering clocks

English-born New Zealand entomologist and astronomer George Vernon Hudson (1867–1946) began collecting insects at the age of nine. In Wellington, New Zealand, he found employment as a shift worker, which left him just enough daylight hours to continue building his insect collection. There was, however, only one problem: in Hudson’s opinion, there were not quite enough daylight hours available for the proper and measured pursuit of his beloved insects. Something had to be done, so in October 1895 he presented a paper to the Wellington Philosophical Society suggesting that it might be prudent to consider a seasonal adjustment in time.

Hudson proposed changing clocks at the equinox, two hours forward in October, and two hours back in March. His idea had already been anticipated by the U.S. inventor Benjamin Franklin (1706–90), who proposed the concept in his essay “An Economical Project for Diminishing the Cost of Light” (1784), but Franklin’s paper was really more a lighthearted satire than any concrete proposal, and it is generally thought that Hudson’s idea represented the first real attempt to make Daylight Saving Time (DST) a reality. Hudson’s paper, unfortunately, was greeted with disdain. It was “wholly unscientific and impractical,” said some; “completely out of the question,” said others, to consider altering a system that had been working perfectly well.

“Everyone appreciates the long light evenings [and] laments their shrinkage.”

William Willett, The Waste of Daylight (1907)

DST was eventually adopted in Germany during World War I (1914–18) to save fuel expended in the powering of artificial lighting. It is now in use in more than seventy countries throughout the world. BS

1895

Multiverse

William James

The concept that our universe is one of many, all of them expanding infinitely

A conceptual artwork showing multiple universes forming from black holes following the Big Bang.

The “multiverse” is a term for all that is real and exists, and has been postulated in astronomy, philosophy, cosmology, and even science fiction ever since it was first coined in 1895 by the U.S. philosopher and psychologist William James (1842–1910), in his essay “Is Life Worth Living?” However, James was referring to the multiplicity of moral choices that humans face, and for him the multiverse was not a concept of cosmology.

The term was nevertheless picked up by astronomers and today it is used to describe the existence of multiple universes, the totality of all physical reality. The multiverse hypothesis puts forward the possibility that the universe we see around us, and that stretches infinitely beyond the boundary of what we are capable of observing, is only one of countless universes, all independent of one another. Together, these encompass all that exists—because they are infinite, there can be nothing beyond, other than, or containing them.

Mathematical models in the field of theoretical physics are increasingly coming down in support of the multiverse theory. The models include string theory and the theory of eternal inflation, the latter of which holds that the expansion of the universe will never end and will inevitably lead to—and has been leading to ever since the Big Bang—a multiplicity of universes. Recent observations in the realm of subatomic reality also indicate the likelihood of a multiverse. But not everyone is convinced. South African cosmologist George Ellis (b. 1939), writing in Scientific American in August 2011, said: “All in all the case for the multiverse is inconclusive,” and is “more a concept than a theory.”

Empirical evidence for the multiverse will forever elude us, because all that we can measure and observe is ensconced well within the universe that we inhabit. The multiverse is likely to remain nothing more than a tantalizing conundrum. BS

1895

Crowd Psychology

Gustave Le Bon

The creation of a branch of social psychology, focused on crowd behavior

College students collectively raise their arms and shout at a job-hunting ceremony in Tokyo, 2011.

Theory on the behavior of crowds, going back as far as Plato, originally assumed that crowd behavior was that of an unthinking mob. Substantive study of crowds in the social sciences was reinvigorated by The Origins of Contemporary France (1875/1893), by the conservative historian Hippolyte Taine (1828–93). But it was in The Psychology of Crowds (1895) that French sociologist Gustave Le Bon (1841–1931) first mined the writings of existing theorists on crowd behavior to create the new discipline of crowd psychology.

Le Bon identified three primary elements of crowd behavior: first, a unity of collective identification, giving a sense of limitless power; second, a heightened sensitivity to emotional appeals arising from that unity; and third, a drop in the crowd’s collective intelligence to that of the lowest common denominator. Crowds, said Le Bon, are easily subject to collective hallucinations, suggestions originated by individuals in the crowd that are thoughtlessly and contagiously adopted throughout the whole.

Le Bon’s theory of crowd psychology received little significant challenge until the later works of sociologists such as George Rudé (1910–93) and E. P. Thompson (1924–93). Thompson’s studies of the actual behavior of crowds focused primarily on the social context and demands of crowds, while Rudé looked at the composition of existing crowds. Their studies challenged views of the crowd as essentially primal and irrational, and instead showed crowds as often being composed of relatively better-off members of communities who are responding to specific threats to their communities, at the same time acting on cultural assumptions that are widely shared.

The study of the psychology and behavior of crowds had long been merely speculation before Le Bon, whose influential studies integrated the study of crowd behavior into formal social science. JE

1895

Nobel Prize

Alfred Nobel

An annual series of awards in five categories designed to reward human excellence and make the world a better place, underwritten by vast profits generated by war

Alfred Nobel’s wealth derived primarily from his work on explosives. In addition to dynamite, he invented and patented gelignite and ballistite, the latter improved by the British as cordite.

“When two armies of equal strength can annihilate each other in an instant, then all civilized nations will retreat and disband their troops.” So said Swedish chemist, engineer, and inventor Alfred Nobel (1833–96), the man who gave the world dynamite, just one of his 355 patents. Nobel was deluded in thinking that his dynamite could bring an end to warfare, but he also—to “relieve his conscience,” as Albert Einstein would later say—bequeathed the fortune he had made from dynamite and his other ventures to the world in support of the ideals of the Nobel Foundation.

“The whole of my remaining realizable estate shall … constitute a fund, the interest on which shall be annually distributed in the form of prizes to those who, during the preceding year, shall have conferred the greatest benefit upon mankind.”

Alfred Nobel’s Last Will and Testament (1895)

When Alfred’s brother Ludvig died in 1888, a Paris newspaper erroneously published the obituary it had prepared for Alfred under the heading “The Merchant of Death Is Dead.” The armaments manufacturer, stung by the denunciation, became determined to leave a legacy for good. In his final will, Nobel allocated monies from his vast empire to be divided annually among five recipients whose work had been distinguished in the categories of physics, chemistry, physiology and/or medicine, literature, and “one part to the person who shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses”—the Nobel Peace Prize. It would be five years after his death before the first awards were given, due to the organizational and bureaucratic difficulties involved.

There have been a few errors along the way. In 1926 the Nobel Prize in Medicine went to Danish scientist Johannes Fibiger (1867–1928) for his “discovery” that roundworms cause cancer (they do not). But in a world where more than 300 other “peace prizes” are awarded every year, it is still the Nobel Peace Prize that holds aloft the brightest hope for good in a changing and often fractured world. BS

1896

The Baldwin Effect

James Mark Baldwin

The theory that adaptations of behavior can affect evolutionary change

In his paper titled “A New Factor in Evolution” (1896), U.S. philosopher and psychologist James Mark Baldwin (1861–1934) proposed that individually acquired characteristics might indeed become hereditary through reinforcement by relevant genetic dispositions in a given species. This view helped to create a bridge between neo-Darwinian and neo-Lamarckian schools in consideration of natural selection.

The standard explanation of natural selection refers to random mutations producing physical traits advantageous for survival and procreation. Baldwin proposed that capacities for behaviors entirely contingent on particularities of the organism’s environment were also a factor. If, for example, an organism were able to develop effective cooperative relationships, and such relationships were necessary for the survival of the species as a whole, then its ability to engage in cooperative activity would add a selective pressure for genetic predispositions in other organisms. Those organisms that were able to express this advantageous behavior would pass on their “new” capacities, to the eventual extent that such capacities could become embedded as instinctual in the species.

Shifts in a society’s natural infrastructure, or even its cultural values, may also provide selection pressures that make certain behavioral capacities beneficial. One of the more prevalent criticisms of this notion is that, as such factors become more complex and socially contingent, there is not sufficient time for the advantageous capacities relating to them to become selected in a species. This leads to broader discussions concerning the extent to which the adaptations of organisms evidencing the Baldwin Effect are hereditary or non-hereditary—and thus how distinct Baldwin’s account of natural selection, with the Baldwin Effect addition, is from Darwin’s “standard” model. JD

1896

Fascism

Gaetano Mosca and Maurice Barrès

An authoritarian, totalitarian political ideology exalting national and/or racial identity

At the end of the nineteenth century, European nations faced an unsettled sense of purpose and a perceived crisis of civilization that prompted a reevaluation of political organization. In his work The Ruling Class (1896), Italian lawyer and political scientist Gaetano Mosca (1858–1941) asserted that, while governments claim to represent the majority, society is almost invariably ruled by an organized minority of elite political actors. This insight promoted an elitism that, while still applicable within democracy, led to an increased empowerment of nationalistic leaders in Italy. Concurrently, the writings of French novelist and politician Maurice Barrès (1862–1923) promoted ethnic nationalism and authoritarianism as the hope of restoring identity, stability, and power to France. Barrès’s work also discussed the charismatic embodiment of a nation’s spirit in its leadership, common in fascist rulers.

“The individual is nothing, society is everything …”

Maurice Barrès, novelist and politician

Fascism emphasizes the supremacy of the state and the subordination of individuals to a national identity and purpose. Often, this occurs alongside a sense of victimization, grievance, or a perceived cultural decline. Fascist regimes mobilize social and economic development around the purpose of rebuilding strength and identity. The totalitarian and authoritarian stance of fascism entails that its methods for pursuing national strength consistently involve displays of power and violent suppression of political opponents. Three of the most prominent European fascist regimes of the twentieth century were led by Benito Mussolini in Italy, Adolf Hitler in Germany, and Francisco Franco in Spain. JD

1896

Form Follows Function

Louis Sullivan

What happens inside a building should be manifested by its exterior structure

Frank Lloyd Wright’s Fallingwater House (1938), with cantilevered concrete echoing natural rock.

In his article “The Tall Building Artistically Considered,” published in Lippincott’s magazine (No. 57) in 1896, U.S. architect Louis Sullivan (1856–1924) wrote the following: “It is the pervading law of all things organic and inorganic, of all things physical and metaphysical, of all things human and all things superhuman, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law.”

Sullivan’s original intent, especially in designs such as the Carson Pirie Scott store (1899, 1903–4) in Chicago, was that the purpose of a building should be manifested by its exterior structure and ornament. For Sullivan, ornament should not merely imitate past styles, but be a new amalgam of geometry and nature.

“Form and function should be one, joined in a spiritual union.”

Frank Lloyd Wright, architect

The truncated phrase “form follows function” was incorrectly co-opted by the Austrian architect Adolf Loos (1870–1933) in 1908, and later used to justify the post-World War II International Modernist aesthetic, which stripped buildings of their ornament, reducing them to flat, linear boxes.

U.S. architect Frank Lloyd Wright (1867–1959) would second Sullivan’s intention in his own architectural practice with his “total design” conceptions for domestic and business buildings. For example, in Fallingwater (Kaufmann Residence), the siting over a waterfall and the opening and closing of certain doors and windows help to circulate cool air throughout the structure. Wright morphed organic shapes into geometric forms in projects as diverse as his long, low Prairie-style homes and his cast-concrete, Mayan-inspired, “California Romanza” Hollyhock House. PBr

1897

Christian Fundamentalism

United States

A movement arguing that the Bible is the literal and factually inerrant word of God

By the late nineteenth century, many U.S. Protestants were growing uncomfortable with the ideas of modernist Christian theologians, who held that the Bible was a collected work written by many authors, and one that contained information of an allegorical or poetic, rather than literal, nature. Largely in response to the modernists, and to scientific discoveries that were pushing Christian thinkers to adopt a less supernatural theological view, conservative Protestants embraced what they saw as the fundamental Christian beliefs articulated at the Niagara Bible Conference of 1897.

By 1915, a series of booklets titled The Fundamentals had been published setting out these conservative ideas, giving the movement its name. No single person is associated with originating the concept, but its adherents believed, among other things, that the Bible was literally true; that salvation was possible only through the belief that Jesus Christ died for the sins of mankind; that only fundamentalists held an accurate view of Christianity; and that Jesus’s return to Earth was imminent.

“The world … needs to be turned upside down in order to be right side up.”

Billy Sunday, evangelist

Christian fundamentalism has had a large impact on modern U.S. religious, political, and social institutions. Though there is no one denomination or unifying doctrine, fundamentalists view the inerrancy of the Bible as paramount, and hold it their duty to defend it against all competing ideas. Fundamentalist Christians are largely responsible for the modern Creationism movement, and, beginning predominantly in the 1970s, have played a major role in shaping the modern U.S. conservative political movement. MT

1897

Montessori Education

Maria Montessori

An educational approach emphasizing independent learning through activity and personal discovery, rather than being taught verbally in the classroom

Italian educational reformer Maria Montessori, who evolved the Montessori method of teaching children, is pictured during a visit to the Gatehouse School in London, England, in 1951.

Italian educator Maria Montessori (1870–1952) began developing her educational philosophy and methodology in 1897 and opened her first classroom in Rome in 1907. With a deep integration of child psychology and pedagogy, Montessori built her approach around accommodating and enhancing a child’s natural mental and sociological development. Rather than resort to rote, dictated learning, Montessori education aims to provide a prepared environment within which students have the freedom to engage and learn according to their natural tendencies.

“Education is a natural process carried out by the child and is not acquired by listening to words but by experiences in the environment.”

Maria Montessori

A central feature of Montessori education is the enabling of student choice among educational activities in a mixed-age classroom setting. This setting is prepared with simplicity, cleanliness, and order, and with materials and opportunities that fit with students’ “planes of development.” The planes of development are Montessori’s categories of progressive human growth, including birth to six years, six to twelve years, twelve to eighteen years, and eighteen to twenty-four years. Montessori education aims to tailor pedagogy to these planes of development, always emphasizing free exploration within a prepared environment.

The constructivist or “discovery” framework of Montessori education is one of its most distinctive and challenging features. By empowering students to choose their own path through the educational landscape, learning is developed through a student’s trial and error, experimentation, and activity, alongside the teacher’s creation of an engaging environment. This approach resists the desire to standardize education, delineate a uniform curriculum, and create tests and measurements of information bases. Constructivist philosophies of education such as Montessori’s retain a certain independence from the “one size fits all” mentality of some other educational systems. JD

1898

Connectionism

Edward L. Thorndike

An attempt to explain human behavior using artificial neural networks

Connectionism is a theory that states that behavioral responses to certain stimuli are established by a process of trial and error that affects neural connections between those stimuli and the most satisfying responses to them. It is applied in psychology, neuroscience, and linguistics to simulate cognitive processes such as perception, memory, learning, and motor skills. The crux of connectionism is that the mind operates through a variety of simple and uniform units that form parts of a great network. For example, in the brain, the units are neurons and the connections between them are synapses.

Connectionism is also known as parallel distributed processing (PDP), a concept that was prefigured in the works of the earliest psychologists. The first proponent of connectionism as it is now understood is generally agreed to have been U.S. psychologist Edward L. Thorndike (1874–1949), who discussed it in his doctoral thesis of 1898 (later published in 1911 as Animal Intelligence). His work was developed by two Americans, neurophysiologist Warren McCulloch (1898–1969) and mathematician Walter Pitts (1923–69), whose influential treatise of 1943 likened the brain to a computing machine and each of its component neurons to a simple digital processor.

Although some critics claim that connectionism is reductionist, the theory has been applied in the field of artificial intelligence and used in the construction of robots, which require the following eight functions: processing units; a stimulus to activate them; a means of output; connections between the units; a means of spreading the process (propagation); a method of converting inputs into new forms; a capacity to learn (remember what has happened and act on the basis of experience); and an environment in which all these activities can occur. GL
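
As a toy illustration only, and not any historical model, the sketch below shows how those eight functions might fit together in a single connectionist unit that learns, by trial and error, to strengthen the connections that yield a satisfying response; every name and parameter here is an invented example.

```python
import random

class Unit:
    """A minimal connectionist unit: weighted input connections, a
    threshold output, and a trial-and-error rule that strengthens
    the connections leading to a satisfying response."""

    def __init__(self, n_inputs, learning_rate=0.1):
        # Connections between units are represented as weights.
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.bias = 0.0
        self.learning_rate = learning_rate

    def respond(self, stimulus):
        # Propagation: activation spreads from the inputs to the output.
        activation = sum(w * s for w, s in zip(self.weights, stimulus))
        return 1 if activation + self.bias > 0 else 0

    def learn(self, stimulus, satisfying_response):
        # Trial and error: adjust each connection in proportion to the
        # difference between the actual and the satisfying response.
        error = satisfying_response - self.respond(stimulus)
        self.weights = [w + self.learning_rate * error * s
                        for w, s in zip(self.weights, stimulus)]
        self.bias += self.learning_rate * error

# Environment: teach the unit to respond only when both stimuli are
# present together (the logical AND of its two inputs).
unit = Unit(n_inputs=2)
trials = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(50):                       # repeated trials
    for stimulus, target in trials:
        unit.learn(stimulus, target)
print([unit.respond(s) for s, _ in trials])   # expected: [0, 0, 0, 1]
```

All eight requirements appear here in miniature: the unit itself, input stimuli, an output, weighted connections, propagation through the weighted sum, conversion of inputs into a response, learning from experience, and a training loop standing in for an environment.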

1898

Space Rocket

Konstantin Tsiolkovsky

The development of a theory for using rocket engines in space

It was on a visit to Paris in the 1880s, while still a provincial school teacher in rural Russia, that Konstantin Tsiolkovsky (1857–1935) looked at the Eiffel Tower and imagined constructing a platform 24,854 miles (40,000 km) high, from the top of which objects could be launched into geosynchronous orbit around the Earth. It was Tsiolkovsky who proposed the first-ever theory of rocket propulsion, in 1898, and even suggested the use of liquid propellants, an idea finally realized in 1926 by the U.S. physicist and inventor Robert Goddard, who became the first man to launch a liquid-fueled rocket. In 1903 Tsiolkovsky became the first to argue that the exhaust velocity of a rocket’s escaping gases would determine its range and speed (the insight at the heart of the so-called Tsiolkovsky rocket equation), and in the 1920s he advocated manned platforms in Earth’s orbit to be used as staging posts for interplanetary journeys. He even foresaw the development of the multistage rocket.
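
In modern notation (not Tsiolkovsky’s own), the relationship he identified is compactly expressed by the rocket equation:

\[
\Delta v = v_e \ln\frac{m_0}{m_f}
\]

where \(\Delta v\) is the change in the rocket’s velocity, \(v_e\) the exhaust velocity, \(m_0\) the initial mass of rocket plus propellant, and \(m_f\) the final mass once the propellant is spent. Because the mass ratio sits inside a logarithm, each extra increment of speed demands disproportionately more fuel, which is why the multistage rocket Tsiolkovsky foresaw, shedding dead weight as it climbs, proved so important.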

“Tsiolkovsky’s … projects did not attract the attention they deserved.”

G. A. Tokaty, aerodynamicist

Tsiolkovsky also wrote science fiction. Unable to confine his enthusiasm for space flight to textbooks and journals, he effusively wrote in 1911: “To place one’s feet on the soil of asteroids, to lift a stone from the moon with your hand, to construct moving stations in ether space, to organize inhabited rings around Earth, moon and sun, to observe Mars at the distance of several tens of miles, to descend to its satellites or even to its own surface—what could be more insane!” Yuri Gagarin may have been the first Soviet citizen and the first human being in space, but it was the largely unheralded ideas of Tsiolkovsky that got him there. BS

1898

Theory of Radioactivity

Henri Becquerel/Pierre and Marie Curie

The scientific study of particle emissions from atoms with nuclear instability

Marie Curie photographed in her laboratory in 1896, two years before she coined the term “radioactivity.”

From the end of the nineteenth century, the theory of radioactivity developed into one of the most important fields of scientific study in history. In November 1895, Wilhelm Röntgen discovered X-rays, and months later related studies of phosphorescence led Henri Becquerel (1852–1908) to posit spontaneous radiation as the explanation of the effects of uranium on photographic plates. As other natural elements were discovered to have similar properties of radiation, Marie Curie (1867–1934) coined the term “radioactivity” in 1898 to name the emerging theory of radiation. In 1903, Becquerel, Pierre Curie (1859–1906), and Marie Curie won the Nobel Prize in Physics for their discoveries.

Radioactive decay involves the loss of energy by an unstable atom. This emission was classified early on according to the size and strength of the rays observed. Further discoveries revealed that radioactivity occurs in the form of particle emissions, such as neutrons, protons, and electrons, and also in the transitional decay of a nucleus that does not result in transmutation into a new element. The most common example of this latter form is the release of gamma rays, which occurs alongside other forms of particle decay.
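
The quantitative rule that later emerged from these classifications, formulated by Ernest Rutherford and Frederick Soddy a few years after Curie coined the term, states that decay is exponential:

\[
N(t) = N_0 \, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}
\]

where \(N_0\) is the initial number of unstable atoms, \(\lambda\) the decay constant characteristic of the isotope, and \(t_{1/2}\) the half-life, the time in which half of any sample decays.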

“Radioactivity … is not to be hated and feared, but accepted and controlled.”

Ralph Eugene Lapp, physicist

Radioactivity was further explicated by the work of Ernest Rutherford, whose experiments were largely responsible for our understanding of atomic structure. The models and postulates provided by atomic theory and the application of radioactive discoveries led to major advances and inventions in the twentieth century, including the harnessing of nuclear energy for power and medical treatments such as radiotherapy. JD

1899

Juvenile Courts

Illinois Legislature

A court devoted to administering justice solely to children and adolescents

Societies have long recognized the problem of juvenile misbehavior, while at the same time grappling with the problem of imposing legal consequences on problematic youths. In 1772 BCE the Babylonian Code of Hammurabi recognized that some crimes were particular to juveniles. Similar recognitions also existed in Jewish, Roman, and Islamic law, and by the fifth century CE, Roman law held that a child under the age of seven could not be held criminally liable. In the eleventh century, English common law recognized the principle of parens patriae (Latin for “parent of the nation”), which reflected the idea that the state could act as a substitute parent for a child in need of guidance or justice. However, it was only in 1899 that the Illinois legislature created the first juvenile court, in Chicago, to preside over cases in which a person under the age of sixteen had violated a state law or city ordinance.

“The children now love luxury; they have bad manners, contempt for authority …”

Socrates, quoted by Plato

A child is not a fully developed person, and the idea that a juvenile can form the requisite criminal intent to be held responsible for a crime has grown more complicated as humanity’s understanding of child development has increased. Juvenile courts understand the importance of justice in a society, while at the same time recognizing that children need guidance and cannot be held as culpable as adults.

As juveniles have become more involved in violent crime, the juvenile justice system has faced growing scrutiny. Murders committed by juveniles account for a relatively small proportion of all murders, but they attract disproportionate media attention. The juvenile justice system therefore faces ongoing challenges. MT

1899

Conspicuous Consumption

Thorstein Veblen

The purchase of goods and services for the sake of publicly exhibiting wealth or status

The overt display of luxury goods and services by the ruling classes in the late nineteenth century led Thorstein Veblen to formulate his economic theory of conspicuous consumption.

In his influential book The Theory of the Leisure Class: An Economic Study in the Evolution of Institutions, published in 1899, Thorstein Veblen (1857–1929) identified a distinctive feature of the newly established upper class of the late nineteenth century and the rising middle class of the twentieth century: their accumulation of luxury goods and services for the express purpose of displaying prestige, wealth, and social status. Veblen, a U.S. economist and social scientist, viewed this phenomenon as a negative symptom of the new rich, one that would inhibit social adaptation to the necessities of the industrial age.

“Conspicuous consumption of valuable goods is a means of reputability to the gentleman of leisure.”

Thorstein Veblen

The intention for exhibition embodied in the idea of conspicuous consumption is in contrast to the securing of goods and services for their intrinsic value or their originally established purpose. Focusing on the “conspicuous” aspect of the term, conspicuous consumption can conceivably occur among members of any socioeconomic class, from the richest to the poorest. Acquiring status indicators can happen in any social setting. Focusing on the “consumption” aspect of the term, conspicuous consumption relates to the purchase and display of goods beyond what is necessary, and applies primarily to the middle and upper classes, who then set patterns of social behavior and consumption that are imitated by others. In this respect, it is closely tied to consumerism.

One ramification of Veblen’s insights into conspicuous consumption relates to the idea of a “luxury tax.” Such a tax increases costs on goods and services that primarily serve as declarations of affluence, in order to raise revenue and redistribute wealth with little loss to consumers who purchase for the sake of status rather than utility. It may also gradually reduce conspicuous consumption of such “positional goods,” or “Veblen goods,” which bear Veblen’s name because demand for them increases as their price increases. JD

1899

Stream of Consciousness

William James

The theory that human consciousness is an unending stream of continuous thought

William James, who is generally accepted as the father of U.S. psychology, has influenced generations of thinkers with his masterwork The Principles of Psychology (1899).

Although the concept of the mind possessing a streaming consciousness can be found in early Buddhist texts, the first modern approach to the phenomenon was put forward by William James (1842–1910), one of the United States’ first recognized psychologists, in his 1,200-page masterwork The Principles of Psychology in 1899. In this book James speaks of consciousness as being “unbroken” and states that there are no “gaps,” or as he liked to say no “intrusive alien substances,” that come along to distinguish or break up one period of consciousness from the next. For consciousness to be interrupted by gaps or intrusions, James thought, is like “expecting the eye to feel a gap of silence because it does not hear, or the ear to feel a gap of darkness because it does not see. So much,” he said, “for the gaps that are unfelt.”

“Like a bird’s life, the stream of consciousness seems to be made up of an alternation of flights and perchings …”

William James, The Principles of Psychology (1899)

Consciousness, rather than being “chopped up,” was likened instead by James to a river or stream, a process that is ever-flowing even in the event of a sudden interruption, such as an explosion or losing one’s footing and falling over. These sorts of things—a clap of thunder or the sound of a gunshot—are about as disconnected from our present thoughts as “a joint in bamboo is a break in the wood.” The thunder clap is as intrinsically a part of our continuing, unbroken consciousness as the joint is a part of the bamboo in which it grows. James believed that our cognitive experiences overlap one another and are linked by what he called “fringes,” subconscious tabs, which act as clasps that are necessary in binding our conscious thoughts together, and prevent us from living in a chaotic inner world of random, unrelated experiences.

James’s theory influenced literature and became a narrative device to depict the multitudinous thoughts and feelings that pass through an individual’s mind. James Joyce’s Ulysses (1922) is one of the best-known examples of the stream of consciousness technique. BS

1899

Dream Interpretation

Sigmund Freud

The process of assigning meaning to dreams in order to unlock the unconscious

Henri Rousseau’s painting The Dream (1910) is thought to portray the dream process.

Dreams—imagined episodes or series of events that come to people’s minds while they are asleep—have always defied interpretation. Ancient civilizations may have believed that dreams were prophetic: there are dreams that foretell the future in the Babylonian Epic of Gilgamesh (c. 2150 BCE), Homer’s Iliad (seventh century BCE), and throughout the Old Testament of the Bible. However, it is uncertain whether the authors of these works were writing what they believed or merely conforming to literary conventions. Other cultures believed that dreams were supernatural visitations, which sought to explain matters that seemed incomprehensible while awake. The currency of such ideas declined during the Common Era, as it became generally accepted that dreams were manifestations—often in distorted form—of matters that had preoccupied the dreamer before falling asleep.

In 1899, Sigmund Freud (1856–1939), the founder of psychoanalysis, published The Interpretation of Dreams, in which he claimed that dreams are expressions of feelings and wishes that are repressed during wakefulness. (The notion that Freud stated in this work that all such repressed thoughts are about sex, although widely credited, has no basis in the text.) The psychologist Carl Jung (1875–1961) then proposed that dreams could be either what Freud described or else expressions of what the dreamers most dreaded; sorting out which dream was which was one of the keys to self-knowledge.

More recently, some scientists have proposed that the function of dreams is purely biological—brainwaves activated by chemical activity in the body—and that no meaning should be attached to the images or messages that they may seem to contain. Dreams are products of the imagination; so, too, are some if not all interpretations thereof. GL

1899

Duck-rabbit Illusion

Joseph Jastrow

An optical illusion used as a psychological test of human perception

The first appearance of the duck-rabbit illusion, a wood engraving in Fliegende Blätter magazine, 1892.

Is it a duck, or is it a rabbit? Sketched by an unknown artist and first published in the Munich-based weekly satirical magazine Fliegende Blätter (Flying Leaves) on October 23, 1892 (and in the U.S. Harper’s Weekly a month later), the duck-rabbit drawing was first used in 1899 by the U.S. psychologist Joseph Jastrow (1863–1944) to test how we perceive our environment; Jastrow referred to the drawing as a piece of “ingenious conceit.”

The Jastrow version of the duck-rabbit illusion is technically more of an ambiguity than the original, more a reversible or “bistable” figure than a true illusion. But whatever you call it, it is one of the best-known images devised for testing a person’s perceptive acumen, along with the Schroeder staircase, which simultaneously looks equally convincing the right way up and upside-down, and the Necker cube, which spontaneously reverses in perspective as it is viewed.

How does the duck-rabbit illusion work in terms of research? When scrutinized in a controlled situation, the eye tends to be led to the lines that compose the image, rather than to the image itself. It is the lines that we see and try to interpret, the image being the “lure.” The lines are then seen alternately as representing a duck and/or a rabbit. It is a psychological tool, used to measure how we make sense of our environment by organizing incoming sensory stimuli.

Can interpretation of the image be used as a measure of intelligence? Is our speed at recognizing both images, our ability to flip our own perception easily from duck to rabbit, and vice versa, a measure of our own creativity? Studies have shown that test participants who are able to see both interpretations quickly tend to be more creative than those who initially struggle to make sense of the image; once the latter group “see” the image, they find it difficult to hold to any one interpretation. The duck becomes the rabbit, and the rabbit, the duck. BS

1899

Consumerism

Thorstein Veblen

The economic view that the acquisition and consumption of goods is beneficial to society

Department stores such as Macy’s in New York, shown here in the 1950s, helped spread consumerism.

While the desire for goods and services beyond mere necessity has been a prevalent reality throughout history, the discussion by Thorstein Veblen (1857–1929) in The Theory of the Leisure Class (1899) relates directly to the results of the Industrial Revolution. As automation and the organization of labor increased productivity in the late nineteenth century, consumer goods became more available. Veblen saw an increasing demand for goods stemming from improved means and greater availability among the developing middle and upper classes in Europe, all of which led him to identify a societal trend toward consumption of goods as an end in itself.

Consumerism is closely tied to Veblen’s notion of “conspicuous consumption,” in which goods and services are acquired more as a display of wealth and status than for their utility. Consumerism goes further, proposing that it is good for members of a society to engage in continual expenditure and consumption, not merely to establish their class status but also to fuel the engines of the economy that contribute to consumer goods.

“Are these things really better than the things I already have?”

Chuck Palahniuk, novelist

The impact of consumerism on Western societies has led to the development of strong businesses and massive economies, but also to an increased reliance on credit and debt. Marketing and brand promotion became major factors in economic growth largely because of the spread of conspicuous consumption. A common criticism of consumerism is its potential to lead to a devaluing of simplicity, utility, and institutions traditionally seen as holding intrinsic worth, in exchange for the continued accumulation of material goods for temporary satisfaction and perceived social status. JD

1899

Hilbert’s Axioms

David Hilbert

A set of assumptions proposed as the foundation of geometry

In his Grundlagen der Geometrie (The Foundations of Geometry), published in 1899, David Hilbert (1862–1943) developed twenty axioms intended to express Euclidean geometry more adequately. Euclid’s original five axioms had long been viewed as incomplete and insufficiently rigorous. Hilbert’s system begins with six primitive, undefined elements: three terms (point, straight line, plane) and three relations (betweenness, relating points; containment, relating points and straight lines, points and planes, and straight lines and planes; and congruence, relating line segments and angles). Hilbert organizes his twenty axioms into five groups. The eight Axioms of Incidence refer to the occurrence and relation of points, lines, and planes in terms of “containment.” The four Axioms of Order discuss them primarily in light of the concept of “betweenness.” The six Axioms of Congruence refer to the equivalence relations of the basic terms, while the Axiom of Parallels defines the concept of a parallel line in Euclidean geometry. Finally, Hilbert concludes with two Axioms of Continuity, known individually as “Archimedes’s Axiom” and “Linear Completeness.”
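
Two informal paraphrases, not Hilbert’s exact wording, give the flavor of the system:

\[
\begin{aligned}
&\textbf{Incidence:} && \text{any two distinct points } A, B \text{ lie on exactly one straight line.}\\
&\textbf{Parallels:} && \text{given a line } \ell \text{ and a point } A \notin \ell, \text{ at most one line through } A \text{ fails to meet } \ell.
\end{aligned}
\]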

“ … a game played according to certain rules with meaningless marks on paper.”

David Hilbert on mathematics

Hilbert’s Axioms evidenced an appreciation for clarity, organization, and rigor that shaped mathematics in the twentieth century. Furthermore, they are indicative of what would become known as “Hilbert’s Program,” the attempt to found mathematics on a system of axioms that is provably consistent. Ultimately, Hilbert’s approach also prompted another monumental discovery: Gödel’s proof that no consistent formal axiomatic system rich enough to express arithmetic can prove its own consistency. JD

Early 20th Century

1900–1949

A detail from a Russian communist poster from 1921. The October Revolution of 1917, in which the Communist Party took power, was greatly influenced by Leninism.

After the first airplane flight in 1903, inventors’ drawing boards were filled with ideas associated with machines; in the world of physics, meanwhile, blackboards were covered with groundbreaking ideas about the workings of the universe, such as general relativity, the Big Bang theory, and nuclear fusion. War has frequently facilitated the adoption of novel ideas, and the two world wars that occurred during this period were no exception, bringing with them numerous technological developments and medical advances. World War II also saw the devastating consequences of political ideas such as fascism, most horrifically in the Holocaust, which led to a global articulation of moral principles in the form of the Universal Declaration of Human Rights in 1948.

1900

The Uses of Laughter

Henri Bergson

A seminal work discussing the nature and value of laughter

An expressive portrait of laughter adorns a cover of French magazine Rions! (Let’s Laugh!, 1908).

In 1900 French philosopher Henri Bergson (1859–1941) wrote a book titled Laughter: An Essay on the Meaning of the Comic. It was divided into three chapters: The Comic in General; The Comic Element in Situations and the Comic Element in Words; and The Comic in Character. Throughout the work, Bergson discusses the aspects of human life that set the stage for laughter and its social function.

One of the main features of Bergson’s approach to laughter is his examination of the “mechanical.” A key premise of his discussion is that human beings have uniquely entered into a habitual, predictable, and mechanical way of living that contrasts with élan vital—the vibrancy of living in its fundamental sense. For Bergson, laughter is a way of seeing the mechanical in human life. It acts as a reminder of human rigidity, and of our blindness to our own vanity and obtuseness, and serves to unite us in that recognition.

“The comic spirit has a logic of its own, even in its wildest eccentricities.”

Henri Bergson, Laughter (1900)

It is Bergson’s focus on the uses of laughter, especially in terms of its social function, that provided a unique influence on future studies. These functions include laughter as a kind of “release valve,” through which the rules and duties that help humans suppress more threatening urges can safely give way to expression of emotion in comedy and drama. In this sense, it is a response to artificiality in society. Furthermore, Bergson’s work reveals how laughter serves as a corrective opportunity, when we laugh at someone’s inability to adjust to the standards of society. In the end, these uses of laughter revolve around returning authentic life to stagnant or unengaged living. JD

c. 1900

Phenomenology

Edmund Husserl

Defining what constitutes consciousness and how it is used to interpret our world

The phenomenological approach to psychology, developed by Austrian mathematician and philosopher Edmund Husserl (1859–1938) in the early decades of the 1900s, can be explained via the following example. Suppose a person sees the image of a dog. Phenomenology says that the act of looking upon the dog qualifies as a genuine experience, regardless of whether the “seeing” takes place in the context of a dream or is otherwise somehow imagined. Phenomenologists have no interest in analyzing whether or not the experience was real, and do not concern themselves with the dog’s actual existence or nonexistence, focusing only on the subject’s belief of having seen it. Such an experience is an instance of qualia (singular “quale”), a Latin-derived term that refers to primary conscious experiences that cannot be misinterpreted, such as a headache. Phenomenologists seek to identify phenomena through the prism of people’s perceptions, free of hypothesis and preconception.

“Experiencing is consciousness that intuits something and values it to be actual.”

Edmund Husserl

In his books Logical Investigations (1900) and Ideas (1913), Husserl elaborated on concepts that eventually led him to develop this new branch of psychology. He stressed that to properly study a person’s consciousness it would be necessary to distinguish between the conscious act and the object to which the act is directed. He stressed the importance of intentionality, the process by which we direct our experience to the things that are in the world. This is the core of Husserlian phenomenology: that our experiences are directed toward things only via our thoughts, ideas, and concepts. JMa

1900

Russell’s Paradox

Ernst Zermelo

A logical paradox that pointed to a contradiction in set theory, demonstrating a fundamental limitation of such a system

Ernst Zermelo, photographed in 1900, whose work on axiomatic set theory had profound implications for the foundations of mathematics.

In 1901 the British philosopher and logician Bertrand Russell (1872–1970) published an inconsistency within mathematical set theory that came to be called Russell’s Paradox, and he became the first to attempt a solution. However, the German logician and mathematician Ernst Zermelo (1871–1953) had independently recognized the paradox in 1900, although he did not publish the idea, and it remained known only to his fellow academics at the University of Göttingen.

“The paradox raises the frightening prospect that the whole of mathematics is based on shaky foundations, and that no proof can be trusted.”

Helen Joyce, mathematician

Russell’s Paradox is a contradiction within set theory, a set being any collection of objects, such as numbers. For example, a set containing the numbers 5, 6, and 7 would be written {5, 6, 7}. Logically, a set may contain itself, so consider what we will call Set A: the set of all sets that do not contain themselves. Can A contain itself? If we say “yes,” we hit a contradiction, because A contains only sets that do not contain themselves. Yet if we say “no,” then A is a set that does not contain itself, so by its own definition it must belong to A, and we hit a contradiction again.
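
In modern set-builder notation, the whole paradox fits on a single line:

\[
A = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad A \in A \iff A \notin A
\]

Each answer to the membership question entails its opposite, so no consistent answer exists.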

A popular, “common” interpretation, leaving aside formal statements such as the one above, goes as follows: “There is a barber in a small village, and all the men in that village either shave themselves or are shaved by the barber. The barber, however, shaves only those who do not shave themselves. So … does the barber shave himself?” If the barber shaves himself, the rule is violated, because he shaves only men who do not shave themselves. But if he does not shave himself, then he belongs to the men the barber must shave, which means he shaves himself after all.

What Zermelo and Russell had accomplished was to throw doubt on the then growing idea that mathematics was reducible to pure logic. How could mathematical proofs possibly be trusted, with the set theories that underlie so much of mathematics now appearing to be incomplete and, worse, contradictory? JMa

1900

Planck’s Law

Max Planck

An equation that sought to measure the amount of radiation emitted by “blackbodies”

First presented by the German physicist Max Planck (1858–1947) in a lecture to a meeting of the German Physical Society in October 1900, Planck’s Law remains one of the great cornerstones of thermodynamics, created to calculate the intensity of radiation emitted in a fixed direction from a so-called “blackbody” (an object that absorbs all of the electromagnetic energy that falls upon it), and how that intensity can vary according to the body’s temperature.
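
In its modern form, the law gives the spectral radiance of a blackbody at frequency \(\nu\) and absolute temperature \(T\):

\[
B_\nu(\nu, T) = \frac{2h\nu^{3}}{c^{2}} \cdot \frac{1}{e^{h\nu/(k_B T)} - 1}
\]

where \(h\) is Planck’s constant, \(c\) the speed of light, and \(k_B\) Boltzmann’s constant. The constant \(h\), which Planck introduced to make this formula fit the experimental data, became the seed of quantum theory.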

Planck had been working on what he called his “blackbody radiation problem” for years, but his equation would have been impossible to achieve were it not for the work of his fellow physicists Otto Lummer (1860–1925) and Ernst Pringsheim (1859–1917) and their experiments in infrared bolometry (the measuring of radiant energy) at Berlin’s Imperial Physico-Technical Institute in the 1890s.

At the end of the nineteenth century most physicists sought to gain an understanding of blackbody radiation by heating a hollow object, drilling a small hole in its side, and then measuring the emitted radiation. Planck chose not to measure radiation levels directly but instead calculated the average release and distribution of entropy (a measure of the energy unavailable for useful work).

Planck’s Law has held together as a theory ever since, although in recent times there seem to be some exceptions in the world of the very small. In 2009 researchers at the University of California, Los Angeles, found that carbon nanotubes just 100 atoms in width refused to emit the quantities of radiation that Planck suggested they should. Then, in September 2012, researchers at the Vienna University of Technology observed that the heating and cooling of silica fibers followed more general rules rather than Planck’s mostly immutable equation. JMa

c. 1900

Structuralism

Wilhelm Wundt

The search for insights into our perceptions and cognitive processes

Wilhelm Wundt (1832–1920) taught the first courses in physiological psychology in Heidelberg, Germany, in 1867. He wrote the world’s first psychology textbook, Principles of Physiological Psychology (1873–74) and established the world’s first “psychological laboratory,” the Institute for Experimental Psychology, at Leipzig in 1879. He was also the first person to be given the title “psychologist” and defined psychology’s first paradigm, which was later given the name “structuralism.”

Structuralism was an attempt to break down mental processes into their most basic constituent parts, to study them as a chemist would study chemical compounds. It grew out of Wundt’s attempt to study the conscious mind introspectively, by examining the sensory perceptions of subjects through the manipulation of external stimuli. It was a search for an objective, rational method of examining our perceptions, one that took into account the crises and rapidly evolving societal forces of late nineteenth-century Europe. This new form of analysis cast a wide net, with the quest for a greater understanding also taking in the field of linguistics and the work of Ferdinand de Saussure (1857–1913).

“Wundt believed that consciousness could be broken down to its basic elements.”

Saul McLeod, Simply Psychology (2007)

Psychologist William James criticized structuralism at a time when various disciplines were competing for dominance in the new science of psychology, claiming it had “plenty of school but no thought.” Structuralism declined in popularity in the late 1920s with the death of Wundt’s most devoted former pupil, Edward Titchener, and it was replaced by humanism, behaviorism, and psychoanalysis. BS

1901

Pacifism

Émile Arnaud

The absolute rejection of war and violence in any form

Women of the National League for Limitations of Armament demonstrating in Washington, D.C. in 1922.

Pacifism is the belief that killing cannot have any justification, and that there can be no such thing as a “just war.” The term was first used by French lawyer and founder of the Ligue Internationale de la Paix et de la Liberté (International League for Peace and Freedom) Émile Arnaud (1864–1921), who codified his beliefs in Code de la Paix in 1901. Subsequently, the idea of pacifism became influential, characterizing peace activists and movements during the twentieth century.

The idea of pacifism is formulated as a protest against pragmatic arguments justifying war. Even self-defense, a common justification for the use of violence, is unacceptable to the absolute stance against violence taken by a pacifist. Pacifism advances the idea that violence is the product of a “cycle of violence,” and that using violence to respond to violence will produce only more violence. Accordingly, pacifism proposes that only radical resistance to any form of violence will break this cycle. Historically, pacifism has its roots in the teachings of notable religious figures, including Buddha and Jesus.

“There are many causes I would die for. There is not a single cause I would kill for.”

Mahatma Gandhi, civil rights activist

As a moral strategy, pacifism proposes to avoid the guilt associated with answering violence with violence. Pacifist movements have also proven to be a successful political strategy, since pacifism has been a notable element of some movements for social change in the twentieth century. Mahatma Gandhi successfully used pacifist strategies to secure Indian independence, and Martin Luther King, Jr. advanced a pacifist version of civil disobedience in the civil rights movement in the United States. TD

1901

Laparoscopic Surgery

Georg Kelling

The development of a minimally invasive form of surgery

Laparoscopic surgery, also known as keyhole surgery, is performed by entering the body via small incisions in the skin or through a body cavity. The first laparoscopic procedure was conducted in 1901 by Georg Kelling (1866–1945), who carried out what he referred to as a “celioscopy” on a dog. Kelling’s method involved inflating the abdominal cavity of the animal and then inserting a thin cystoscope (an optical instrument used to examine the bladder) through the abdominal wall, enabling examination of the area without damaging any of the internal organs. The first laparoscopic procedure on humans, using a similar methodology, was performed by Hans Christian Jacobaeus (1879–1937) in 1910.

Although laparoscopic surgical techniques were gradually refined over the decades that followed, laparoscopy did not become fully popularized until the advent in the late 1980s of a computer-chip television camera that could be attached to the laparoscope (a fiber-optic viewing tube), enabling the surgeon to closely monitor and view the procedure. Originally, the medical community held many reservations about the safety of laparoscopic procedures. However, the reduced risk of hemorrhage caused them to grow in popularity, particularly in gynecology for outpatient procedures such as tubal ligation.

“The method is based on the fact that … the abdominal wall is extremely flexible …”

Georg Kelling

Laparoscopic surgery has revolutionized surgery, and it is now common for laparoscopic techniques to be used for numerous procedures. Because laparoscopic surgery is less invasive, patients often experience less pain, spend less time in hospital, and have a lower risk of complications due to surgery. NM

1901

Pavlov’s Dog

Ivan Pavlov

A study of conditioning in dogs that formed the basis of behavioral psychology

Pavlov’s Dog refers to the well-known experiments conducted by Russian physiologist Ivan Pavlov (1849–1936) from 1901. Pavlov discovered that dogs could be conditioned to salivate when a bell rang. This kind of learning, now known as “classical conditioning,” has had profound effects on the world of psychology, specifically in behaviorism. Pavlov’s work demonstrated that an animal could learn to behave unintentionally in a particular way. The dog’s salivation in relation to the bell is an unintended response, but it is nevertheless a learned response. Psychologists studying behaviorism have continued to discover that humans, too, can be conditioned to respond to stimuli, without even realizing it.

Pavlov’s experiments began with attempts to learn about the physiology of dogs and how they salivate. He and his assistants would bring meat or meat powder to the dogs and observe their salivation behaviors. These experiments had nothing, initially, to do with conditioning. However, Pavlov and his assistant, Ivan Filippovitch Tolochinov, noticed that the dogs they were observing would begin to salivate as soon as someone in a white lab coat entered the room. At first this was simply an annoyance, but Pavlov realized that the dogs had learned, unintentionally, to associate the scientists with food and would salivate even when there was no food present. Pavlov observed that an initially neutral stimulus, such as a bell or a white coat, could become associated with an unconditioned stimulus such as food, and that after a period of experiencing both together the formerly neutral stimulus would cause a conditioned response, such as salivation.

Pavlov’s theory is now used in areas such as marketing, politics, and education. Any field that seeks to cause a change in behavior is one that uses classical conditioning. NM

1902

Organ Transplantation

Alexis Carrel

The development of a technique to enable organs to be transplanted

Doctors had dreamed for millennia about the possibility of transplanting organs from one body to another as a means by which to save lives. The surgical procedure, which involves either taking an organ from a donor, who may be living or dead, or from the patient’s own body, has only recently become a viable medical procedure. Through the use of drugs that help suppress the body’s natural tendency to reject foreign organs, doctors can implant donor organs or tissue into a patient who needs the organ for improved quality of life or survival.

French surgeon Alexis Carrel (1873–1944) carried out pioneering work in organ transplantation and successfully transplanted different organs in dogs from 1902. His success depended on his development of a method of suturing blood vessels, a technique crucial to transplant surgery. Through his work, he was also one of the first to identify the problem of organ rejection. Carrel, with Charles A. Lindbergh (1902–74), invented the perfusion pump, an essential step toward making organ transplantation possible. The first truly successful organ transplant, however, was performed by Joseph Murray and J. Hartwell Harrison: a kidney transplant between identical twins, in 1954. Today, with drugs such as cyclosporine, which suppress the immune rejection response, organ transplantation is often successful.

Currently the only organs that can be transplanted are the heart, kidneys, liver, lungs, pancreas, intestine, and thymus. Organ transplantation is a field that is rapidly improving in technique and possibility. Stem cell research is increasing the possibility of using a donor’s own cells to grow new organs for transplantation. For the moment, however, the consistently improving viability of organ transplantation has resulted in a steadily increasing demand for organs. Unfortunately, there is also a shortage of donors and people die every day waiting to receive an organ transplant. NM

1902

The Correspondence Theory of Truth

G. E. Moore

A philosophical way of understanding the nature of truth and falsehood—the “truth” is whatever corresponds to reality

Together with Bertrand Russell, G. E. Moore led the move away from idealism in British philosophy and founded the analytic tradition.

The correspondence theory of truth holds that truth consists of a relation to a mind-independent reality. While the theory has a long history, its modern origins can be traced to two essays by British philosophers: “Truth and Falsity” by G. E. Moore (1873–1958) in 1902 and “On the Nature of Truth” by Bertrand Russell (1872–1970) in 1906. The pair were friends and influenced each other’s thinking.

“The view that truth is the quality of belief in facts … is a form of the correspondence theory, i.e., of the theory that truth means the correspondence of our ideas with reality.”

Bertrand Russell, “On the Nature of Truth” (1906)

The idea that truth consists in agreement with reality is an intuitive one, and precursors to the correspondence theory can be found in thinkers as diverse as Aristotle, St. Thomas Aquinas, and John Stuart Mill. It is often associated with metaphysical realism, which holds that the fundamental features of reality are independent of human perception. Russell and Moore’s influential version of the correspondence theory proposed that a belief was true if it corresponded to a fact, and false otherwise. This was in explicit contrast to the theories of truth proposed by some idealist philosophers of the day, who had argued that “truth” was a matter of human experience fitting together in the right way, and that all actual beliefs were only “partially true” or “partially false.” In later years, Russell defended the closely related thesis of logical atomism, which held that the external world itself was constituted by the discrete, atomic facts of the type that made beliefs true or false.

Moore and Russell’s explicit formulation and defense of the correspondence theory sparked an interest in truth across a number of disciplines, including philosophy, logic, and linguistics. Many contemporary scholars still endorse versions of the correspondence theory, although they disagree about the nature of truth bearers (are they sentences, beliefs, or something else?) and the particular relation to the world that makes these truth bearers true or false. BSh

1902

Leninism

Vladimir Lenin

The advocacy of a Marxist revolution led by a “vanguard” party

Leninism, named after its originator Vladimir Lenin (1870–1924), is a political theory emerging from Marxism that advocates the creation of a socialist state. Leninism prescribes revolution, specifically revolution led by a vanguard party (a group of revolutionaries that goes first) to educate and lead the proletariat (working class) in achieving social and economic freedom from the bourgeoisie (upper class). The end goal of Leninism is the establishment of direct-democratic rule by the proletariat. Intended specifically as a rejection of the capitalistic practices of the Russian Empire, Leninism was initially a practical theory of revolution rather than a philosophy, consisting of the “how-tos” of revolution. Lenin put forward his ideas in the political pamphlet What Is to Be Done?, which was published in 1902.

The term “Leninism” was not coined until two years before Lenin’s death. His theory emerged from his attempts to bring Marxist philosophy to Russia by overthrowing the existing government. Lenin’s success during the October Revolution of 1917 enabled him to establish a “Dictatorship of the Proletariat.” Dictatorship, in this case, meant democratic rule by the working class. As a Marxist, Lenin believed that the working class was repressed, and he advocated a shift of power from the wealthy to the workers who provide the wealth.

Leninism has had profound implications and remains controversial. It led to Lenin’s establishment of the Russian Socialist Federative Soviet Republic in 1917, which eventually absorbed numerous surrounding countries and became the Union of Soviet Socialist Republics (USSR). The rule of Joseph Stalin, after Lenin, was marked by the Great Purge (1934–38) and the execution of hundreds of thousands of innocents. However, Leninism remains among its advocates the best means by which to educate and empower the people to rise up against repression. NM

1903

Constructivism

John Dewey

A theory of learning that encourages human inquisitiveness and curiosity

Constructivism is a theory of learning with roots embedded in the world of psychology. It tells us we are “constructors of information,” that our desire to learn is an active thing—a constructive, logical process that grows as a result of our own experiences and subsequent reflections as we build up our subjective comprehension of the world around us.

The person generally credited with identifying and codifying this approach to learning is the U.S. philosopher and educational reformer John Dewey (1859–1952), who set out his views in a series of essays titled “Thought and its Subject-Matter,” published in 1903 in Studies in Logical Theory. As an educator, Dewey created an atmosphere of “experiential learning,” the act of deriving meaning and knowledge directly from real-life experiences, encouraging his students to think, reflect, explore, and question for themselves in a free and open environment.

“Education is growth; education is not preparation for life but is life itself.”

John Dewey

By the late 1930s, Dewey’s approach had started to develop into two schools of thought: social constructivism, the product of Russian psychologist Lev Vygotsky (1896–1934), who emphasized the social and cultural contexts in learning; and cognitive constructivism, the “four stages of learning principle,” pioneered by the Swiss psychologist Jean Piaget (1896–1980). Constructivism replaced other “information-processing” approaches to learning that had failed to appreciate the role of the learner, who was considered a mere receptacle of “hardwired” facts, and by the 1980s it had emerged triumphant as the leading theory on how humans learn. BS

1903

Nudism

Paul Zimmermann

A way of life, in harmony with nature, expressed through social nudity

Cycling is a popular recreational outdoor activity for nudists.

It is difficult to pinpoint the origins of nudism, or naturism, as it is often called. However, the first official nudist club, Freilichtpark (Free Light Park), was opened by Paul Zimmermann in an area of secluded woodland north of Hamburg, Germany, in 1903. It attracted visitors from around the world, and nudism gained a large following in Europe, particularly in Germany. The International Naturist Federation was formed in France in 1953 to further the cause.

Nudism was first spoken of as a means of improving physical health and lifestyle in the late 1700s, although its precise definition varies widely both historically and from country to country. Nudism encompasses everything from an individual’s preference for being naked in his or her own home to the militant campaigning of public nudity activists. There is also social nudity: nudists who meet together at special social events, at nudist resorts or beaches, and in “nudist colonies,” although this last term is now outdated and avoided by nudists because of its negative connotations. Cruise lines also offer naturist cruises.

Although not as popular as they once were, there are still a number of events in which nudists can participate openly on the world stage. The International Naturist Federation has designated the first Sunday in June the World Day of Naturism. Every year in May, everyone—nudists and non-nudists alike—is asked to participate in World Naked Gardening Day. The World Naked Bike Ride is a clothing-optional event that began in 2004 (although informal naked rides had occurred before then). What began as twenty-eight rides in twenty-eight cities across the world has since mushroomed to more than eighty cities in almost twenty countries. Arguments over the pros and cons of public nudity abound, yet surprisingly little evidence has been forthcoming to demonstrate that the practice has any negative effect on the moral fabric of society. BS

1904

Protestant Work Ethic

Max Weber

A concept advocating the importance of work for its own sake

City workers pour out of London Bridge Station, London, in February 1962.

As a theological, sociological, economic, and historical concept, the Protestant work ethic emphasizes the importance of working not only for the acquisition of material goods, but also for its own sake. It stresses the idea that hard work produces its own reward. As a religiously based notion, it was initially grounded in the idea that frugality and hard work were evidence of one’s Christian salvation. All hard work, whether or not it is performed in an ordained profession such as the priesthood, is considered godly, while sloth and idleness are considered sinful.

German sociologist and political economist Max Weber (1864–1920) developed his Protestant ethic idea in two journal articles, published in 1904–05, which were later combined in his book The Protestant Ethic and the Spirit of Capitalism. Weber argued that the advent of capitalism was in large part due to the freedom of work provided by the Protestant work ethic. The ethic itself was the result of combining the spiritual teachings of German theologian Martin Luther (1483–1546) with the philosophies of French theologian John Calvin (1509–64). Luther believed that a man’s work was his spiritual calling, and Calvin argued that all men must work, even the rich, because idleness itself was sinful. All work became good work as long as it was done diligently and without indulgence in sin. As a result, it was godly to receive reward and pay for hard work, because the work itself was sanctified by its very diligence.

Many philosophers, economists, and social theorists argue not only that the Protestant work ethic defined capitalism and caused its spread, but also that the economic success of the United States is a direct result of the work ethic adopted by its early Puritan settlers. Work ceased to be something that people did merely in order to survive, and workers began to produce as much as possible, regardless of surplus. NM

1904

Poincaré Conjecture

Henri Poincaré

A theorem about the characterization of the three-dimensional sphere

Russian mathematician Grigori Perelman was awarded, but declined, the Fields Medal—often described as the Nobel Prize of mathematics—in 2006 for his proof of the Poincaré Conjecture.

In 1904 the French mathematician Henri Poincaré (1854–1912) posed what has since become one of the world’s great mathematical dilemmas, one that would take almost a century to answer. It was so confounding that it became one of the Millennium Prize Problems, a select group of unsolved problems for each of which the Clay Mathematics Institute in Cambridge, Massachusetts, offered a one-million-dollar prize. The question: Is there a test for recognizing when a shape is a three-dimensional sphere by performing measurements inside the shape?

“Fifty years ago I was working on Poincaré’s Conjecture and thus hold a longstanding appreciation for this beautiful and difficult problem. The final solution by Grigori Perelman is a great event in the history of mathematics.”

Stephen Smale, mathematician and Fields medalist

In order to help understand the dilemma, imagine an orange and a rubber band. If the rubber band is stretched around the orange, it is possible to “shrink” it down to a single point on the orange by continually moving it in on itself, without it ever leaving the surface. The orange’s surface is, in Poincaré’s terms, “simply connected”: it allows for the contraction of the rubber band. If we try the same thing with a donut, however, looping the rubber band through the hole, no amount of sliding will reduce it to a point; either the donut or the rubber band would have to break in the attempt, so the donut’s surface is not simply connected. Poincaré knew that, among closed surfaces (finite, boundaryless shapes such as the orange’s skin), simple connectivity is enough to single out the sphere. His conjecture asked whether the same is true one dimension up: must every simply connected, closed three-manifold (the three-dimensional analogue of such a surface) be a three-dimensional sphere?
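
In modern mathematical language (a standard formulation, offered here for clarity rather than as Poincaré’s own wording), the whole question compresses into a single line, with $M$ a three-manifold and the condition $\pi_1(M) = 1$ being the formal way of saying that every rubber-band loop in $M$ can be shrunk to a point:

\[
M \ \text{closed and} \ \pi_1(M) = 1 \;\Longrightarrow\; M \cong S^3
\]

Any shape satisfying the left-hand side, however it happens to be presented, must be the three-dimensional sphere $S^3$ in disguise.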

Poincaré’s question was answered in 2003 by the Russian mathematician Grigori Perelman (b. 1966), who completed a program begun by Richard Hamilton based on the Ricci flow, a geometric flow in differential geometry. Perelman declined the Fields Medal, mathematics’ highest honor, in 2006, and refused the one-million-dollar prize money when it was awarded in 2010. Although he was still listed as a researcher at St. Petersburg University, Perelman had apparently left academia and abandoned mathematics. JMa

1904

Biblical Socio-scientific Criticism

Max Weber

Analysis of the Bible using social-scientific methodology

Biblical socio-scientific criticism examines the social and cultural dimensions of scripture, including descriptions of actual geographical locations, such as the Garden of Gethsemane.

Biblical socio-scientific criticism (also known as social-scientific criticism, socio-historical criticism, and social-world criticism) is a multidisciplinary area of biblical study that draws on the social sciences, in particular cultural anthropology and sociology. It uses social-scientific methods to analyze empirical data and the communities behind biblical documents in order to examine the social and cultural conditions in which the Bible texts were produced. Socio-scientific criticism aims to further the understanding of the writers and their purposes, as well as their audiences. For example, biblical socio-scientific criticism takes into account social factors in early Palestinian society in its examination of the origins of Christianity.

“With the salvation doctrine of Christianity as its core, the Pauline mission, in achieving emancipation from the self-created ghetto, found a linkage to a Jewish … doctrine derived from the religious experience of the exiled people.”

Max Weber, Ancient Judaism (1917–19)

Biblical socio-scientific criticism has its roots in the early twentieth century, when the German sociologist, philosopher, and political economist Max Weber (1864–1920) wrote two journal articles, published in 1904–05, that were later combined into a founding text of sociology, The Protestant Ethic and the Spirit of Capitalism. Weber wrote several books on religion, including Ancient Judaism (1917–19), in which he asserted that the apostles converted few Jews to Christianity because they were unable to break into the Jews’ well-structured communities.

German theologian Ernst Troeltsch (1865–1923) expanded sociological examination of the Bible in The Social Teaching of the Christian Churches (1911), in which he theorized about the institutionalization of groups and how sects emerged. Biblical socio-scientific criticism emerged in its modern form in the 1970s when U.S. theologian Robin Scroggs (1930–2005) published The Earliest Christian Communities as a Sectarian Movement (1975) and The Sociological Interpretation of the New Testament (1977), which developed key themes in the social context of early Christianity. By the 1980s, biblical socio-scientific criticism had become a mainstream form of analysis of Judeo-Christian scripture. CK

1905

Special Relativity

Albert Einstein

A physical theory of measurement that explains motion in the universe

Einstein photographed in 1905, the year that his theory of special relativity was conceived.

Special relativity is a physical theory of measurement developed by Albert Einstein (1879–1955) in 1905. He realized that the speed of light is the same for every observer, and that measurements of space and time must therefore be relative to the observer’s state of motion. The theory rests on two postulates: 1) The laws of physics hold true in all inertial (non-accelerating) frames of reference; and 2) The speed of light is measured as the same constant in all such frames. What is profound is Einstein’s realization that no matter where a person is or how fast he or she is moving, the measured speed of light never changes.

Special relativity requires the acceptance of a profoundly counterintuitive idea. Usually, we think of speed as relative to a frame of reference. For example, if I am traveling on a train at 10 miles per hour (16 kph) and I throw a ball at 5 miles per hour (8 kph) in the direction I am moving, the ball, relative to me, will travel at 5 miles per hour. However, to a person on the ground observing the train and the ball, the ball will be moving at the train’s speed plus the speed at which I threw it—15 miles per hour (24 kph). The speed of light, however, does not work in this way. No matter the frame of reference, it moves at the same speed. If I used a flashlight instead of a ball on the same train, the light would travel at the same speed from both my perspective and that of the observer on the ground.
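
The train example can be made exact. In special relativity, velocities do not simply add; they combine according to the relativistic velocity-addition formula (a standard result of the theory, with $c$ the speed of light, $u$ the speed of the train, and $v$ the speed of the ball relative to the train):

\[
w = \frac{u + v}{1 + uv/c^2}
\]

For the train ($u$ = 10 miles per hour) and the ball ($v$ = 5 miles per hour), the correction term $uv/c^2$ is of the order of $10^{-16}$, so $w$ is 15 miles per hour to any measurable accuracy, which is why everyday speeds appear simply to add. Replace the ball with the flashlight beam, so that $v = c$, and the formula gives

\[
w = \frac{u + c}{1 + uc/c^2} = \frac{c\,(u + c)}{c + u} = c,
\]

exactly the speed of light, whatever the train’s speed $u$.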

“The whole of science is nothing more than a refinement of everyday thinking.”

Albert Einstein

Einstein’s theory has influenced everything from the way we measure the speed of objects in deep space to the accuracy of the global positioning system (GPS) used by cell phones. Because of special relativity, the world changed how it understood the laws of physics. NM

1905

The Theory of Descriptions

Bertrand Russell

A philosophical theory about how we make meaning in language

The theory of descriptions, developed by Bertrand Russell (1872–1970) in his 1905 paper “On Denoting,” helps us to understand what makes a statement true or false. It explains that definite descriptions (phrases of the form “the so-and-so,” as in “the dog is brown”) are not expressions that necessarily refer to an object. Instead, definite descriptions have a logical structure. Because that structure can be symbolized, we can work out how such sentences make meaning, which for logicians means we can say that a sentence is true or false, as opposed to just nonsense.

The problem that Russell was trying to fix was a problem of “meaning.” Before his idea, philosophers assumed that definite descriptions must refer to something, or else be meaningless. On that view, statements such as “The present king of France is bald” are meaningless because they refer to nothing—there is no king of France—and therefore we cannot say they are true or false. Yet the sentence does seem to communicate something meaningful. Russell argued that the sentence is really a bundle of three claims: there is a person who is at present king of France; there is no more than one such person; and that person is bald. Since there is no king of France, the first claim fails, and so the sentence “The present king of France is bald” is meaningful but false.
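
In standard modern notation (not Russell’s original symbolism), the three claims combine into a single quantified sentence, where $K(x)$ abbreviates “$x$ is at present king of France” and $B(x)$ abbreviates “$x$ is bald”:

\[
\exists x \, \bigl( K(x) \;\wedge\; \forall y \, ( K(y) \rightarrow y = x ) \;\wedge\; B(x) \bigr)
\]

Because nothing satisfies $K(x)$, the existential claim fails, and the whole sentence comes out false rather than meaningless.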

“The world is full of magical things patiently waiting for our wits to grow sharper.”

Bertrand Russell

Today philosophers, linguists, and mathematicians still argue about Russell’s theory because, although it solves some problems, it creates new ones. If we say that “The present king of France is bald” is false, then its negation, “The present king of France is not bald,” ought to be true, and that sentence seems just as much to presuppose a king of France. How we answer such questions structures everything else we know about language, thought, and truth. NM

1905

Permanent Revolution

Leon Trotsky

How socialist revolutions could occur in societies that were less advanced

Although the phrase “permanent revolution” was first used by Karl Marx (1818–83) in his book The Holy Family (1844), decades passed before it became linked to the philosophy of the Communist revolutionary theorist Leon Trotsky (1879–1940), who believed there should be a proletarian revolution occurring somewhere in the world all the time. Trotsky formed the idea in a series of essays in 1905 that he later collected in his book Results and Prospects (1906). He gave no acknowledgment to Marx and Friedrich Engels for coining and promulgating the term, which had been used only sparingly in the intervening decades.

In the wake of the Russian Revolution of 1917, Trotsky saw the need to export the Bolshevik revolutionary model abroad rather than focusing solely on perfecting the revolution inside Russia’s own borders, which was the stated aim of Joseph Stalin’s “Socialism in One Country” approach. Trotsky considered the success of revolutions elsewhere vital to the success of the new Russia, because a hostile, rampant capitalist world could one day threaten the stability even of their ideal workers’ state.

“ … a permanent revolution in a newer and broader sense of the word.”

Leon Trotsky

Trotsky wanted to send representatives abroad to assist other revolutionary movements in what he called “backward countries,” a policy he considered Russia’s ideological responsibility. He believed that workers in colonial and semi-colonial nations needed to achieve emancipation and to establish their own proletarian dictatorships, but felt that, left unaided, they were incapable of bringing any such revolt to a successful conclusion. Trotsky argued for a permanent revolution, ever continuing and expanding until it had transformed every nation on Earth. BS

1906

Tantric Sex

Pierre Arnold Bernard

Increasing sexual pleasure through (misinterpreting) ancient Eastern texts

A painting from Nepal shows the Buddhist guru Padmasambhava in a tantric pose.

Tantric sex is a form of physical, mental, and spiritual discipline aimed at increasing sexual arousal, prolonging coitus, and producing multiple or sustained orgasms. Although the adjective is intended to suggest a connection with Indian religions, scholars suggest that tantric sex as it is marketed in the West is a misinterpretation of tantrism. A central figure in fomenting the misinterpretation was Pierre Arnold Bernard (1875–1955)—nicknamed “The Omnipotent Oom”—who founded the Tantrik Order in 1906.

In their native context, tantras are esoteric religious texts of certain sects of Hinduism and Buddhism, focusing on ways of appropriating and channeling the energy of the divine, including visualization, the reciting of mantras, and yoga. Sex is not a major component. However, it was often taken as such by Westerners, who tended to regard tantrism as reflecting the depravity of a decadent East. Bernard’s contribution was to operate a series of studios in the United States offering yoga to the public and secret tantric rites—reportedly involving prolonged coitus—to the initiated. Although his operation was derailed by scandal, his interpretation of tantric sex proved influential. With the sexual revolution of the 1960s and the New Age movement of the 1970s, tantric sex came firmly into the public eye. In 1996, Nik Douglas revived Bernard’s Tantrik Order as “The New Tantric Order in America.”

“ … the use of the term ‘tantric’ [in ‘tantric sex’] … is entirely misplaced.”

David Gordon White, Kiss of the Yogini (2006)

While the authenticity of tantric sex is dubious, it would be ungenerous to begrudge its devotees the pleasure they find in it. The association of sex with the mysterious East is likely to continue to be appealing. GB

1906

Satyagraha

Mahatma Gandhi

Social or political reform through nonviolent protest and a conversion to truth

Gandhi in South Africa in c. 1903. He developed his philosophy of satyagraha while working there as a lawyer.

Satyagraha can be translated as “insistence on truth.” A nonviolent philosophy of social change, satyagraha requires the oppressed to seek the conversion of the oppressor to the truth. Like practitioners of “passive resistance,” those who follow a path of satyagraha do not use violence as a means to stop violence. However, satyagraha arguably differs from passive resistance in that its followers seek a whole acceptance of the truth, particularly of the wrong that has been committed. Followers of satyagraha refuse to submit to the evils and repression that others would inflict on them, firmly loving the oppressor while requiring that the oppressor acknowledge and stop the violence and evil.

Satyagraha was a philosophy developed by Mahatma Gandhi (1869–1948) in 1906 while he was struggling for Indian rights as a lawyer in Johannesburg, South Africa, and one that he later used in the Indian independence movement. Gandhi argued that satyagraha was a weapon of the strong that allows for no violence and always insists upon the truth. Unlike many other forms of civil disobedience, success in satyagraha is defined not only as a change in social organization or stigma, but also as a conversion of the oppressor to the truth (a recognition that the harm he or she is doing is wrong). By this demanding standard, critics argue, Gandhi’s efforts were profoundly unsuccessful.

“Victory attained by violence is tantamount to a defeat, for it is momentary.”

Mahatma Gandhi

Satyagraha has nonetheless had a profound impact on human struggle, including on Martin Luther King, Jr.’s philosophy of love and resistance. The U.S. civil rights movement was in many ways both a product of satyagraha and an example of it in practice. NM

1906

Coherence Theory of Truth

H. H. Joachim and F. H. Bradley

Theory that truth depends on coherence with a set of propositions or beliefs

This theory holds that truth consists in coherence with a specified set of propositions or beliefs. Suggestions of the view can be found in the work of thinkers such as Baruch Spinoza and Immanuel Kant. However, the first in-depth defenses of the view were provided by two British idealist philosophers: H. H. Joachim (1868–1938), who in 1906 wrote The Nature of Truth: An Essay, and F. H. Bradley (1846–1924), whose essay “On Truth and Coherence” was published in 1909.

The coherence theory is a competitor of the correspondence theory of truth, which holds that truth consists in correspondence with the external world. Many of the original proponents of the coherence theory of truth, including Joachim and Bradley, were idealists who believed that there was no such thing as a mind-independent external world, and that everything was in some sense “mental.” The coherence theory fits naturally with this view, since it identifies true beliefs with those that “fit” together within an integrated, systematic whole. The theory was also defended by the logical positivist Otto Neurath, who argued that it fit with the scientific practice of evaluating new hypotheses and evidence by seeing how they “cohere with” previously accepted theories.

“We are like sailors who have to rebuild their ship on the open sea …”

Otto Neurath, “Protocol Sentences” (1932)

While coherence theories of truth still have prominent defenders, they are not as widely accepted as they once were. Possible reasons for this include the widespread rejection of idealism by academic philosophers and the fact that contemporary anti-realists often adopt deflationary theories of truth, which claim that there is no deep “nature” of truth. BSh

1906

Pareto Efficiency

Vilfredo Pareto

An economic model in which no one can be made better off without making another worse off

Pareto efficiency (also known as Pareto optimality) is a hypothetical state of affairs in economic theory in which no one can be made better off without making at least one individual worse off. The concept is taken from the work of the Italian economist and sociologist Vilfredo Pareto (1848–1923), who used the idea in his studies of economic efficiency and income distribution. An allocation of resources is Pareto efficient if no other allocation of resources exists in which someone is better off and everyone is at least as well off. Pareto developed the concept in his Manual of Political Economy, which was first published in 1906.
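
As a minimal illustration (a toy sketch with invented utility numbers, not an example from Pareto’s Manual), an allocation is Pareto efficient exactly when no other feasible allocation would make at least one person better off while leaving everyone at least as well off:

```python
def dominates(u, v):
    """True if utility profile u Pareto-dominates v: every person is at
    least as well off under u, and at least one is strictly better off."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_efficient(profiles):
    """Return the profiles not Pareto-dominated by any feasible alternative."""
    return [u for u in profiles if not any(dominates(v, u) for v in profiles)]

# Hypothetical utilities for two people under four feasible allocations.
feasible = [(3, 3), (4, 2), (2, 4), (2, 2)]
print(pareto_efficient(feasible))
# [(3, 3), (4, 2), (2, 4)] -- only (2, 2) is Pareto-dominated (by (3, 3)),
# so the other three allocations are all Pareto efficient.
```

Note that Pareto efficiency says nothing about fairness: the lopsided profiles (4, 2) and (2, 4) count as efficient just as much as the even split (3, 3).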
