The idea of a mantra is important for understanding the way that a person’s mind can be intentionally and completely focused on a certain task. Mantras are particularly useful in religious practices that strive to push the self beyond its own consciousness. Outside of religious traditions, the term “mantra” has come to refer to any phrase that is commonly repeated, typically one that contains an essential truth or guiding principle. TD
c. 1500 BCE
Zoroastrianism
Zoroaster (Zarathustra)
An ancient Persian religion teaching cosmic dualism
The remains of a Zoroastrian Fire Temple in Yazd, Iran. Fire is held sacred in Zoroastrianism.
It is not known precisely when the ancient Persian religion of Zoroastrianism first came into existence, although scholars generally agree that it first appeared in the late Bronze Age (1500–800 BCE) as a development of Persian polytheism. The sacredness of the bull, for example, entered into Zoroastrianism (and Hinduism also), as did the strong insistence on ritual purification and the holiness of fire (for this reason Zoroastrians never burn their dead, but rather dispose of corpses by “exposing” them to the birds).
The ancient Persians and Zoroastrians alike also revered asha (in Hinduism rta), a term that is best understood as truth or universal law, especially moral law. Asha was itself upheld by three ahuras, or “good deities”: Varuna, Mithra, and, the most supreme of all, Mazda, the lord of wisdom.
Claiming to be a true prophet of Mazda, Zoroaster, also known as Zarathustra, whose dates are disputed, taught a form of cosmic dualism, namely, that there are two supreme, morally opposed gods—Ahura Mazda (good) and Angra Mainyu (bad). Zoroaster believed that Ahura Mazda and his ahuras were waging a great war against Angra Mainyu and his devas (evil deities), and that humankind, caught in the middle of this war, should choose to ally itself with Ahura Mazda, not only because Mazda is good but also because Mazda will ultimately be victorious.
Zoroastrianism’s influence lies first in its sheer longevity: it is one of the oldest living religions. Beyond this, Zoroastrianism gave Islam its format of five prayers a day, Tibetan Buddhism its practice of corpse exposure, Mahayana Buddhism its world savior concept (Saoshyant, or Maitreya), and Gnosticism and Manichaeism their belief that the world was made by an evil spirit. Zoroastrianism’s apparent influence on Jewish and Christian eschatology, however, has proved difficult to substantiate. AB
c. 1500 BCE
Angels
Ancient Israel
Conditionally immortal, supernatural spirits existing partway between God and man
A panel from St. Matthew and the Angel (1602) by Michelangelo Merisi da Caravaggio.
An eternal being is one that has no beginning and no end, and in the history of religion only the monotheistic God and the pantheistic Deity can truly be called eternal. However, many religions talk about “gods” and “goddesses”—immortal beings that have a beginning, but no natural end. These gods can be destroyed but they do not die of diseases or old age as humans do.
According to the oral origins of the Hebrew Torah (c. 1500 BCE) and the Bible, angels roughly correspond to these gods: they are probably gendered (most of the angels in the Bible are masculine, but some, such as in Zechariah 5:9, are probably feminine). They can be good or bad (“angel” usually means “good angel” whereas “demon” usually means “bad angel”), and they play an important, but often unseen, part in human history (for example, one fallen angel, Satan, took the form of a serpent and brought ruin to man, but another, Gabriel, was a messenger of good news).
“There stood before me an angel of God … and he said, ‘Do not be afraid …’”
The Bible, Acts 27:23–24
Angels are spirits that do not have physical bodies, but they probably have “spiritual bodies” of some sort. Regarding humans, Jesus said, “We will be like the angels,” and St. Paul said that we will be given “spiritual bodies,” suggesting that the difference between angels and humans—at least in the next life—is much smaller than some, such as the great Italian angelologist Thomas Aquinas (1225–74), have imagined.
Angels have profoundly shaped world culture. The vast majority of the world’s population believes in them, with some taking comfort in the belief that “guardian angels” watch over them; others fear the potential of bad angels as agents of evil. AB
c. 1500 BCE
Adam and Eve
Ancient Israel
The first two humans believed by Christians to have been created by God
According to the oral origins of the Hebrew Torah (c. 1500 BCE) and the Bible, Adam and Eve were the first two humans. Genesis 1 discusses the creation of mankind in general; Genesis 2 discusses the creation of man in particular: Adam first, then Eve. According to Genesis 2, God formed Adam from dust (adamah means “red earth” in Hebrew) and then breathed life into him. Subsequently, God made Eve from Adam’s rib; both are said to be made in God’s image.
Generally the consensus has been that the “breath” or “image” of God is the rational nature, soul, or spirit of the first humans. Like the angels, humans as rational souls have the capacity for rationality (that is, given the proper conditions, they can think rationally), for free will, for emotions, and so on. Adam and Eve are distinguished from the animals by this image, yet their corporeal bodies are similar to those of the beasts. Moreover, while Adam and Eve are described as equal with regard to being rational souls or persons, they are also described as unequal in respect of their responsibility: Adam shares with God the masculine property of headship (that is, Adam is responsible to lead and protect his wife, the feminine).
“In the image of God He created them: male and female He created them.”
The Bible, Genesis 1:27
Given that most adherents of the Abrahamic religions believe that a historical Adam and Eve existed, their importance is obvious: as the first parents, they are the models for what humans were designed to be, and yet, at least according to Christians, represent what humans unfortunately usually become—sinful, unjust, or “fallen,” and in need of redemption. AB
c. 1500 BCE
The Devil
Ancient Israel
If God is a personification of good, the Devil personifies the absence of good, or evil
From the Greek diabolos, the word “Devil” is considered synonymous with the Hebrew satan, which itself means “adversary”—in particular, an adversary of God. The Devil is a Hebrew idea that originated around 1500 BCE and appears in the Hebrew Bible, though scholars disagree about exactly where in the text. Jews see no direct link between the serpent that tempted Adam and Eve in the Garden of Eden and the adversary of Job or the tempter of King David. However, for Christians and Muslims, Revelation 12:9 and 20:2 make it clear that the Devil is that same serpent or “dragon.” Allowed by God to go about his evil practices, he is “the prince of this world” (John 12:31) or “the god of this world” (2 Corinthians 4:4).
The Devil’s princeship or lordship over this earth probably has to do with his apparent status as an extremely powerful but “fallen” angel who took his war against God to earth and now holds dominion over many nonbelievers. Although direct statements about the Devil having been an angel are few in the Bible (“Lucifer” probably has nothing to do with the Devil), there are some that make this fairly clear. Jude 1:6, for example, talks about “angels who neglected their first estate,” and Revelation 12:7–9 talks about “the great dragon and his angels” who were cast out of Heaven.
“The Devil has come to you in great wrath, because he knows that his time is short.”
The Bible, Revelation 12:12
The Devil is unable to tempt many people at once (people wrongly attribute to him an omnipresence that only God has), but his global influence is undeniable. For billions he is a symbol of pure evil (reflected in phrases such as “That’s diabolical”), even though, theologically, pure evil cannot exist—evil being a privation of natural goodness. AB
c. 1500 BCE
Reincarnation
Unknown
Every human has been born, has lived, and has died before, and will do so again
The concept of reincarnation is both ancient and widespread. It refers to the belief that human beings have immortal souls that have been, and will be, joined with other bodies, past and future. Thus each individual human has lived before their current birth and will survive death by being reborn as another. In some cases, for example in Hinduism and Buddhism, the cycle of reincarnation can be escaped by a process of coming to enlightenment or salvation. Reincarnation has played a central role in Hinduism for thousands of years. In both Hinduism and its offshoot, Buddhism, it is part of a continuous cycle of birth, death, and rebirth known as samsara. The ultimate goal of those religions is to escape samsara, a process called moksha (or “letting go”). In ancient Greece, reincarnation appears in the sixth-century BCE teachings of Pythagoras and the fourth-century BCE philosophy of Plato. Meanwhile, in the Christian New Testament, Jesus Christ overtly claims that John the Baptist was the reincarnation of the Hebrew prophet Elijah, and at least some sects of Islam (particularly in South Asia) also believe in reincarnation.
“Don’t grieve. Anything you lose comes round in another form.”
Jalal al-Din Rumi, Persian poet and mystic
From the late nineteenth century, interest in reincarnation—along with all things Eastern—enjoyed an explosion of popularity in Western culture. The writings of teachers, such as Madame Blavatsky of the Theosophical Society and Edgar Cayce, helped to spread the belief that people have lived many lives. General George S. Patton claimed to have been a soldier in many of the world’s great armies. And L. Ron Hubbard, the founder of the controversial Church of Scientology, also espoused a belief in reincarnation. APT
c. 1500 BCE
Avatar
India
The manifestation or incarnation of a deity in human or animal form
An Indian miniature from the seventeenth century, showing Vishnu in his first avatar as a fish. The story of the fish avatar describes Vishnu rescuing the first man from a great flood.
References to what would later be known as an “avatar” are first found in the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE. The later Bhagavad Gita (c. 100 CE) refers in detail to the process of avataravada (the Hindu theory of incarnation), although the term “avatar” itself does not appear in Hindu scripture until later texts. An avatar is probably best described as the incarnation of a deity, but the term also conveys the wider notion that the gods have the ability to take any form and will descend to Earth to counteract a particular evil in the world (the Sanskrit word avatāra translates into English as “descent”). The complex Hindu god Vishnu is particularly closely associated with the notion of the avatar, and he is believed to have a number of incarnations.
“Whatever form pleases His bhaktas, is a form that He readily assumes.”
Vedanta Desika, Sri Vaishnava guru
There are ten principal avatars of Vishnu accepted across all of Hindu scripture. The avatars are said to take a variety of forms, including a fish, tortoise, dwarf, and boar. Two of the most significant manifestations of Vishnu are the human avatars of Rama and Krishna, whose stories are recounted in the central Hindu texts Ramayana (500–100 BCE) and Bhagavata Purana (c. 950). The ninth avatar of Vishnu is believed to be the Buddha, and the tenth avatar, Kalki, who has never yet manifested himself in the world, is predicted to appear at the end of Kali Yuga, the world’s current epoch, to restore balance and righteousness to humanity.
As the anticipated coming of the tenth avatar of Vishnu demonstrates, the notion of avatar is still relevant and significant to Hindus today. This belief in a future manifestation of God in human form can similarly be found across many of the world’s major religions and within more minor religions and cults. Thus, the idea of avataravada or incarnation serves in the contemporary world as a source of hope for a better, and more virtuous, future humankind. LWa
c. 1500 BCE
Sewer System
Minoa
A means of safely transferring human waste away from inhabited areas
The remains of a system of stone drains at the ancient Minoan settlement of Ayia Triada on Crete, Greece. The town and royal villa there were built between 1600 and 1100 BCE.
In the early days of humanity’s existence, when people lived as nomadic hunters and gatherers, the need for control of human waste was limited because people rarely stayed in an area long enough, or in large enough numbers, for long-term waste collection issues to arise. Eventually, however, people made the transition from a nomadic life to a sedentary one, which led to the establishment of villages, towns, and, eventually, cities. Inevitably, the concentration of large numbers of people in one place resulted in either significant accumulations of waste or the contamination of neighboring waterways.
“Sanitary services … [are] indispensable for the functioning and growth of cities.”
Martin V. Melosi, The Sanitary City (2000)
From about 2500 BCE to 1500 BCE, ancient cities in the Indus Valley developed basic sanitation facilities, including toilets cleaned by flushing water, waste receptacles, and drainage systems. It was not until about 1500 BCE, however, that the Minoan civilization of Crete and the eastern Mediterranean created a complete sewer system featuring toilets, terracotta pipes to carry away waste, and large stone sewers to receive it. However, it was the Romans who developed the most complete sewer system of the ancient world. Their main innovation was the use of aqueducts to bring water to the cities, with clean water separated from the dirty water used to flush the large, stone-covered sewers. After the fall of the Roman Empire in 476, the world would not see such effective, technologically advanced sewer management again until the construction of centralized sewage treatment plants in the nineteenth century.
Sewer systems have been key to the development and expansion of large, permanent human settlements. Cities lacking a comprehensive sewer system are prone to outbreaks of diseases and epidemics. With the construction of sewers, however, humans can congregate into huge, relatively clean cities without exposing themselves to serious health risks. MT
c. 1500 BCE
Messiah
Ancient Israel
A divinely appointed king who will act as a savior for humanity
A second-century painting showing the Jewish prophet Samuel anointing David as the King of Israel.
The Hebrew word mashiach and the Greek word christos are translated in English as “messiah” and “christ,” respectively. The two terms refer to the same concept: a heroic leader or savior king who will rescue the people from their hardship. The Persian king Cyrus the Great, for example, is called a messiah in the Hebrew Bible because he helped the Jews to return to their homeland.
While there are many lesser messiahs or christs mentioned in Jewish and Christian texts, Jews and Christians distinguish these from the Messiah or Christ. According to ancient Israelite prophecy, the Messiah will be a king who rules at the end of history and establishes lasting peace for God-fearers. Jesus identified himself as this messiah, which is why Christians speak of him not simply as “Jesus, a christ,” but “Jesus, the Christ,” or “Jesus Christ.” Jesus, Christians argue, fits the bill as the Messiah because not only was he a king in two senses (the earthly, Davidic sense, and the heavenly, son of God sense) but he was also the greatest savior or hero imaginable: he rescued people from death and separation from God.
The idea of a messiah is a profoundly important one for Jews and Christians—for Jews because the Messiah is yet to come, and for Christians because he has come and will return again at the end of earthly history. Without the hero king, Christians believe that they cannot be reconciled with God because only the Messiah can represent all people (as kings do) and take, sacrificially, their transgressions upon himself.
Not surprisingly, the idea of the messiah has inspired a myriad of “Christ types” in popular culture, such as Aragorn in J. R. R. Tolkien’s The Lord of the Rings (1954) and Superman (the leader of the Justice League). Conversely, it has also inspired many “anti-christ” figures, such as Johnny Rotten of the Sex Pistols and even Satan. AB
c. 1500 BCE
Scapegoat
Ancient Israel
An individual selected to bear the blame that properly belongs to a group
The Scapegoat (1854), by William Holman Hunt, was inspired by Hunt’s study of Jewish scripture.
Since the beginning of recorded history, man has made sacrifices to God or the gods. Most commonly, these sacrifices were appeasement sacrifices, used to express man’s sorrow to God or the gods for some transgression committed and to ask for reconciliation with, and the restored favor of, the divine figure. From about 1500 BCE, the ancient Israelites were one of many groups to carry out this practice, which they developed in the specific form of a scapegoat—literally, a goat that symbolically bore the sins of the people and was cast away from the camp in order to show God how sorry the people were for their acts of injustice.
The word and understanding of the concept of a scapegoat comes from the ancient Israelites, but they were not unique in the practice of using one. The ancient Greeks, for example, spoke of the pharmakos, a person who symbolically bore the problems of the people and was consequently exiled or even killed in order to please the gods and reverse the people’s collective misfortune.
Besides differing over the use of an animal or a human, the Israelites and Greeks also disagreed about the quality of the scapegoat. The Israelites usually reserved the best kind of animal for such activities, whereas the Greeks intentionally sought out less desirable persons—slaves and cripples—for the task.
The Christian Bible’s depiction of Jesus as “the lamb of God,” who died for the sins of the many, is another key example of this broad scapegoat-appeasement-atonement tradition. However, unlike the animal, which cannot literally take away the sins committed by man, or the imperfect human, who never volunteered for the role, Jesus has been seen by Christians as the perfect sacrifice and appeasement of God’s justice. The idea of a scapegoat was important in many ancient cultures, and is still very important today, largely because of Christianity. AB
c. 1500 BCE
Millennialism
Zoroastrian religion
The belief that the world will undergo a radical upheaval at the end of the millennium
A panel from an altarpiece (c. 1449), showing the Archangel Michael weighing souls at the Last Judgment.
In its strict sense, “millennialism” is a term used to refer to the expectation among a collective that the end of days will occur at the millennium, to be followed by a specified period of peace, harmony, and abundance for all. However, millennialism can more broadly be taken to refer to the conviction of any collective that takes an “end of days” approach to the unfolding of history. For example, in the Christian theological tradition, millennialism is accompanied by the belief that God will pass judgment on all of humanity for the purpose of meting out punishment and reward.
However, millennialist convictions have been part of human societies across the world for thousands of years. The concept of millennialism first appeared in the teachings of the Zoroastrian religion, founded in around 1500 BCE, and it has since featured in belief systems that include the Baha’i Faith and Mormonism. Often a belief in millennialism has been strong at the time that a new religion is founded, before fading as the religion becomes more established in society.
“Cast him [Satan] into the bottomless pit … till the thousand years should be fulfilled …”
The Bible, Revelation 20:3
Millennialism seems to satisfy a deep psychological need in the species to exact retributive justice. It also serves to keep groups focused on the things that matter, and perhaps even serves to strike fear into the hearts of those members of communities or societies who might otherwise commit venal or immoral acts. More importantly, however, millennialism can be interpreted as the ultimate form of revolutionary change. Hence the belief in it is, counterintuitively, a kind of progressive impulse that can be set against the impulse to conservatism that marks most of human history. JS
c. 1350 BCE
Monotheism
Ancient Egypt
The belief that there is one supreme, uncreated God who created everything else
Monotheism is the idea that there is a single supreme God, who, themself uncreated, created all else. The idea dates from around 1350 BCE, when the pharaoh Akhenaten (d. 1336 BCE) ruled that Egypt should worship only the sun, instead of having an array of gods.
Today, there are two forms of monotheism: theism, which sees the one supreme God as personal and concerned with their own creation; and deism, which sees God as impersonal and unconcerned. Most monotheists are theists, especially those of the Abrahamic beliefs. According to these traditions, the world’s first religion was a form of monotheism, but later on was perverted, fragmenting the personality and powers of God into many separate deities, thus giving rise to polytheism (the belief that there are many gods). Early on, monotheists did not oppose the belief that other gods existed; rather, they insisted that other gods—usually called angels—were created by God and therefore should not be worshipped. Contemporary theistic monotheism divides into either “one personality, one substance” (Judaism and Islam), or “three personalities, one substance” (Christianity).
“In the beginning, God created the heavens and the earth.”
The Bible, Genesis 1:1
Monotheism, in any of its forms, is distinct from both pantheism, which states that there is one God who is all that exists (matter and individual personalities being illusory), and henotheism, which favors the exclusive worship of one god (like monotheism) but does not deny the existence of other potentially uncreated gods. From its earliest days in the ancient Near East and Egypt to the present day, monotheism has been hugely influential in world religions and philosophies. AB
c. 1300 BCE
Hero
Ancient Greece
A courageous warrior who performs great, usually moral, deeds
A Roman mosaic depicting Theseus, the heroic founder of Athens, overcoming the minotaur—a half-man, half-bull monster that lived in a labyrinth on Crete.
“Hero” comes from the Greek heros, meaning “great warrior.” The idea of a hero or great warrior is older than ancient Greek culture, but the Greek variation of the concept is arguably the best known. Scholars believe that the heroic actions described by Homer (fl. c. 850 BCE) occurred in the years 1300 to 1150 BCE.
“What makes a hero? Courage, strength, morality, withstanding adversity?”
Fyodor Dostoyevsky, Notes from Underground (1864)
The first heroes in Greek culture, like those in most ancient cultures, were usually gods or demigods who strove against some daunting foe or task. For example, Marduk, a lesser Babylonian god, fought the primordial goddess, Tiamat, to become the world’s first dragon slayer; and Hercules, a Greek demigod, endured twelve labors—including slaying the nine-headed Lernaean Hydra and obtaining the girdle of Hippolyta, queen of the Amazons—to achieve the title of “hero.”
In every culture, a hero is a warrior who has at least one moral virtue: courage. Marduk and Hercules, for example, both have this virtue, yet their motivation for courageous battle is often the pursuit of some nonmoral good, such as glory (Hercules) or power (Marduk). Over time it was recognized that a true hero is not only a courageous warrior but also one whose goals are justice and sacrificial love. Indeed, a true hero does not necessarily have to be a warrior in the physical sense. In the West, for example, Jesus became a hero not simply because he fought (spiritually) against Satan and Death, but because he fought in the name of righteousness and love.
Many of the West’s more recent examples of heroes, such as the Knights of the Round Table or Superman and other members of the Justice League, further develop this idea. In the East, the notion of the hero, such as the Chinese Yue Fei—a military commander who fought 12 battles without a single defeat, before later being imprisoned and strangled—is that of a courageous warrior who self-sacrifices for the good of the group or nation. AB
c. 1250 BCE
Passover
Moses
A biblical event rich in symbolism for both Jews and Christians
Ultra-Orthodox Jewish men in Jerusalem select matzoth (unleavened bread) to be eaten during Passover. It is traditionally eaten on the first day of the holiday.
Sometime during the Late Bronze Age (1500–800 BCE), the Israelites—God’s “chosen people” according to the Bible—were enslaved in Egypt. Scholars have calculated the year of their exit from Egypt, termed the “Exodus,” to have been c. 1250 BCE, following the remarkable event of the Passover the same year.
“This is the day that I brought your divisions out of Egypt. Celebrate this day as a lasting ordinance for the generations to come.”
The Bible, Exodus 12:17
The book of Exodus states that God sent ten “plagues” against the Egyptians, trying, with increasing severity, to convince the pharaoh to release the Israelites. The tenth “plague” was the most severe of all, for this saw God “passing through” the Egyptian people, slaughtering every firstborn male, both human and animal, who was not sheltered behind a specially marked door. All those protected by doors that had been faithfully marked with lamb’s blood on God’s instruction were “passed over” by Him. When the firstborn of pharaoh himself was killed, the Egyptian king released the Israelites.
The Passover is also called the Feast of Unleavened Bread because, on the night before God passed over the faithful (the night of 14/15 Nisan, sometime in spring), the Israelites had been told to prepare certain foods to sustain them as they hastily left Egypt. A lamb was to be roasted, and unleavened bread (bread without yeast) was to be served. In commemoration of that fateful night, the Passover festival, which lasts seven or eight days, is still the most celebrated of the Jewish festivals. After the Temple’s destruction in 70 CE, however, Jews ceased to partake of the sacrificial Paschal lamb (from the Hebrew Pesach, meaning Passover).
Christians, too, celebrate the Passover, although for them the liberation celebrated is that of the faithful from sin by the sacrifice of Jesus, “the lamb that was slain.” Indeed, according to the Synoptic Gospels (Matthew, Mark, and Luke), it was the Passover that Jesus celebrated on his “Last Supper,” making all future Eucharistic celebrations even richer in meaning. AB
c. 1250 BCE
Dietary Laws
Moses
Religious guidelines for the preparation and consumption of foods
Dietary laws restricting the preparation and consumption of certain foods have been an important feature of many religious traditions. In the Jewish tradition, commands given by God through prophets for a proper diet (notably to Moses at Mount Sinai in the thirteenth century BCE) are a key element in the teachings of the Torah. Building on the Jewish tradition of religious texts, the Christian and Muslim traditions also include dietary laws regulating what believers can consume and how it should be prepared.
In the Jewish tradition, dietary laws are called kashrut, and any food that is prepared according to them is considered kosher. Dietary laws in Jewish, Christian, and Muslim teachings include both matters of hygiene in food preparation and complete prohibitions of some foods. Prohibited foods typically include “unclean animals,” which usually refers to pork, some kinds of seafood, and dead animals (those not specifically killed for consumption). In both the Jewish and Muslim traditions, additional guidelines are added to the laws defined in religious texts.
“ … whatever doesn’t have fins and scales you shall not eat; it is unclean to you.”
The Bible, Deuteronomy 14:9–10
Dietary laws also figure in Eastern religious traditions. Hinduism advocates a vegetarian diet, and in particular the avoidance of beef because cows are considered to be sacred. Vegetarianism is also a feature of Jainism, due to its central tenet of nonviolence.
The justification of dietary laws connects religious practice to modern findings about dietary health: modern analyses have identified many links between healthy eating and the avoidance of specific foods that these laws restrict. TD
c. 1250 BCE
The Ten Commandments
Moses
A list of general ethical principles given to humankind by God
A seventeenth-century painting of Moses with the Ten Commandments, by Jacques de Letin.
The Ten Commandments—also known as the “Decalogue”—are a set of Judeo-Christian ethical principles that God is said to have dictated to Moses at Mount Sinai in the thirteenth century BCE. Unlike the other 603 commandments listed in the first few books of the Bible, the Ten Commandments are special, not only in how they came to be written (directly by God), but also in their timeless content. Roughly, they are: (1) Worship no gods but God, (2) Do not worship idols, (3) Do not blaspheme God’s name, (4) Remember the Sabbath, (5) Honor your parents, (6) Do not murder, (7) Do not commit adultery, (8) Do not steal, (9) Do not bear false witness, and (10) Do not covet. In the New Testament of the Bible, Jesus claims that they can be broken down into two: love God first, and love other human beings as oneself. However, arguably, there is just one commandment: Do justice—treat each as it ought to be treated—wherein one ought to treat a superior as a superior (God as God), an equal as an equal (human beings as human beings), and a subordinate as a subordinate.
“In ten phrases, the Ten Commandments express the essentials of life.”
Krzysztof Kieslowski, film director
Whatever the case, none of the Ten Commandments is special for being uniquely known by the ancient Israelites: versions of most are found in most religions. This ubiquity is often explained by the presence of Natural Law, or general principles of justice, that can be known to all and that many believe to have been given by a divine power. This universality of the essential meaning of the Ten Commandments demonstrates their importance as a foundation for how people should live. AB
c. 1200 BCE
Trojan Horse
Agamemnon
A subversive force that is unwittingly invited in by its target
A fourteenth-century illumination depicting the Greek capture of Troy with the Trojan Horse.
The origins of the term “trojan horse” can be traced back to the Trojan War (dated by later Greek authors to have taken place in the twelfth or thirteenth century BCE), the legendary conflict between the early Greeks and the people of Troy in western Anatolia. Traditional accounts tell that Paris, son of the Trojan king, abducted Helen, wife of Menelaus of Sparta, whose brother Agamemnon then led a Greek expedition against Troy. The war lasted ten years, until Agamemnon developed a deception plan that used a huge wooden horse to infiltrate a raiding party into Troy. The Trojan Horse was left at the gates of Troy when the Greeks supposedly abandoned their siege. The Trojans were persuaded that the horse was an offering to Athena that would make Troy impregnable and, despite several warnings, it was taken inside the city walls. That night, warriors emerged from the horse and opened the city’s gates to the returned Greek army. The Greeks massacred Troy’s men and carried off its women, then sacked the city.
“Do not trust the horse, Trojans. Whatever it is, I fear the Greeks even bearing gifts.”
Virgil, The Aeneid (c. 20 BCE)
A “trojan horse” can refer to any kind of subversion introduced to a host from the outside; today it is most often used to describe malicious computer software. A computer user is induced to install an application that, once activated, makes the computer a “bot” or “zombie,” allowing a remote malware developer “backdoor” access to the infected computer. A trojan can subject the host computer to a variety of destructive or undesired activities, such as stealing data, acquiring personal passwords or account numbers, and erasing files. Trojans are frequently used for espionage, blackmail, fraud, and identity theft. BC
c. 1000 BCE
The Twelve Olympians
Ancient Greece
The most important gods and goddesses in the Greek pantheon
The Twelve Olympians, known in Greek as the Dodekatheon, were the most important gods and goddesses in the ancient Greek pantheon. Believed to dwell on Mount Olympus in Greece, they were central to the Greek mythology that developed from around 1000 BCE. According to Hesiod’s seventh-century BCE Theogony, the first written work on Greek mythology, they were third- and fourth-generation gods, all descending via the union of Kronos and Rhea, and, before that, the union of Ouranos and Gaia. Zeus, Hera, Poseidon, and Demeter are the third-generation gods of the twelve, and Dionysus, Apollo, Artemis, Hermes, Athena, Ares, Aphrodite, and Hephaestus are the fourth generation. All of the fourth-generation Olympians are children of Zeus, who is the king of the twelve.
“The Greek gods … did not love humans … and did not ask to be loved by them …”
Barry B. Powell, Classical Myth (1994)
Although the Olympians are best known by their Greek (or, later, Roman) names, most were not of Greek origin. The Greeks belonged to a larger Indo-European (Hittite) culture, which had its own mythology that itself borrowed heavily from even more ancient, largely Mesopotamian, sources. For example, the Hittite storm god was the chief deity and went by the name, “Sius,” which is etymologically linked with the Greek “Zeus”. Additionally, the war between the Olympians and the Titans is probably based on an ancient Mesopotamian story, which, in its biblical form, talks about a war between angels and demons in heaven.
Although few worship the Olympians any more, their influence remains. This can be seen in the arts in particular, where many great Western works use, or refer to, the Olympians and their achievements. AB
c. 1000 BCE
Yijing
China
An ancient Chinese text providing a system of divination based on Daoist philosophy
The Yijing (I Ching) is known in the West as The Book of Changes. One of the oldest Chinese texts, it is a system of divination grounded in Chinese geomancy and Daoist philosophy. It was probably written down sometime between the fifth and third centuries BCE, though its origins date back to around 1000 BCE.
The Yijing is meant to be a guide to help people understand and manage change. The Dao (the way of nature) manifests itself through a complementary process of change between the forces of yin (negative or dark) and yang (positive or bright). If you can determine the interrelationship of yin and yang at a given time, you can understand how to best harmonize your actions with the Dao. The text of the Yijing is made up of oracular readings associated with sixty-four hexagrams. Each hexagram is composed of two trigrams made up of three lines each. These lines are either solid, representing yang (—), or broken, denoting yin (- -). A reading is obtained by casting yarrow stalks or flipping coins to determine the yin/yang nature of the lines, which combine to form two trigrams, which in turn make a hexagram. The reader then looks up that hexagram in the Yijing to determine the proper course of action to take under the circumstances.
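The casting procedure described above lends itself to a brief illustration. The following sketch (in Python, and not part of the original text) simulates the common three-coin method, in which heads count as 3 and tails as 2, so that three coins yield a line value between 6 and 9; values of 7 and 9 are taken as solid yang lines, and 6 and 8 as broken yin lines. Looking up the resulting hexagram in the Yijing itself is left aside here.

```python
import random

def cast_line() -> int:
    """Toss three coins (heads = 3, tails = 2) and return the line value (6-9)."""
    return sum(random.choice((2, 3)) for _ in range(3))

def cast_hexagram() -> list[str]:
    """Cast six lines, built from the bottom up, to form one hexagram."""
    lines = []
    for _ in range(6):
        value = cast_line()
        # 7 and 9 are solid yang lines; 6 and 8 are broken yin lines.
        lines.append("———" if value in (7, 9) else "—  —")
    return lines

if __name__ == "__main__":
    hexagram = cast_hexagram()
    # Hexagrams are conventionally read from the bottom line up,
    # so the last line cast is printed first.
    for line in reversed(hexagram):
        print(line)
```

The lower three lines form one trigram and the upper three the other; the reader would then consult the corresponding entry among the sixty-four hexagrams.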
“Knowing others is intelligence; knowing yourself is true wisdom.”
Yijing (I Ching or The Book of Changes)
The Yijing is one of the seminal texts of Daoism, influencing countless East Asian thinkers. It has been studied in Japan since the Tokugawa era (1603–1867), and the flag of South Korea displays the Daoist taijitu symbol surrounded by four trigrams. Western scholars, such as Gottfried Wilhelm von Leibniz and Carl Jung, have studied its theories of structural change. JM
c. 1000 BCE
Halloween
Britain and Ireland
The night when the veil between life and death is thinnest
The most probable origin of Halloween lies in the ancient Celtic festival of Samhain, which is believed to have first taken place around 1000 BCE in Britain and Ireland. Associated with pre-Christian Gaelic festivals and later linked with the Roman festivals of Pomona and Feralia, Samhain was held to mark the end of the harvest and the beginning of the new year on November 1. October 31 was known as “All Hallows’ Eve,” or “Hallowe’en,” and it was believed to be the time when the veil between life and death was thinnest and contact with spirits might be possible. In order to ward off or mislead these shades, turnips and gourds with frightening faces painted or carved on them were displayed, and bonfires were lit. Pumpkins were later substituted for turnips as they were more readily available—a tradition that has persisted to the modern day, though their purpose is now more decorative than apotropaic.
In the Catholic tradition, Halloween falls on the night before All Saints Day, or “Hallowmas” (November 1), a celebration to honor martyrs of the faith. Young men, often costumed as saints, would go from house to house requesting donations for the poor and praying for their souls, a practice known as “souling.” When Martin Luther initiated the Protestant Reformation on All Hallows’ Eve in 1517 and questioned the legitimacy of saints, he rejected All Saints Day, and so for the next 300 years Halloween was primarily celebrated by Catholics and Anglicans (Episcopalians) and ignored or banned by Protestants. However, an influx of Scottish immigrants and Irish Catholic refugees in the nineteenth century led to a resurgence of interest in the holiday in the United States, and by the mid-twentieth century it was a popular fixture across the country. Today Halloween is largely a secular celebration, and its observance has spread worldwide. PBr
c. 800 BCE
All-powerful God
Ancient Israel
The view of God as a single being who has total power, benevolence, knowledge, and presence
The Three Shekel Ostracon—believed by many to be a forgery—is an eighth-century BCE pottery fragment bearing a receipt for a donation of three Shekels to the Temple of Yahweh.
An all-powerful god is not merely one who knows all and sees all, but one for whom nothing is impossible. The all-powerful deity rules the universe with perfect knowledge (omniscience) and ability (omnipotence), imparting divine justice impartially. Such a god is omnipresent, inherently truthful, and loving: a perfect being that no other can eclipse.
“God is that infinite All of which man knows himself to be a finite part. God alone exists truly. Man manifests Him in time, space, and matter.”
Leo Tolstoy, author
In ancient polytheistic religions, the gods often possessed spectacular, yet limited, abilities, while monolatrous religions recognized many gods but venerated one as supreme. The idea of a single, all-powerful supreme god appears to have originated in ancient Israel sometime around 800 BCE. The Israelite religion was originally polytheistic, and Hebrew scriptures contain references to multiple gods. However, during a period of conflict with their neighbors, the Israelites began to focus their worship on the god Yahweh, who came to be viewed as the main deity. Following defeat by the Assyrians and Babylonians, belief in Yahweh intensified, and by around the eighth century BCE Yahweh had come to be seen as supreme. Today, a majority of the world’s population belongs to a religion that holds that such a god exists.
For the believers in an all-powerful being, faith is a necessity and an ever-present Damoclean sword. Nothing you do, think, say, or feel is ever outside of God’s awareness, and there is no punishment or reward that is outside the realm of possibility. In a world where God is all-powerful, everything is as God intends it to be, and the work of humanity can be nothing but a reflection of God’s glory. The belief in such a god and such a world has compelled artists to create, warriors to kill and conquer, the pious to dedicate themselves to spiritual discipline, and the faithful to devote every moment of their lives to the being for which there can be none greater. MT
776 BCE
Ancient Olympic Games
Ancient Greece
A series of athletic competitions held near Olympia in Greece
A fifth-century BCE Greek vase depicting athletes participating in a foot race.
The ancient Olympic Games were first held in 776 BCE in Greece, though their mythical origins go back much further. There were three other major panhellenic athletics championships held at that time, but the most important was the Games held at Olympia, in the territory of Elis, as part of a religious festival in honor of Zeus, king of the gods. Although war between Greek city-states was a frequent occurrence, traditionally a truce was declared to allow athletes from all cities to take part. The Olympiad, the four-year period between each Games, was used as a calendar to date other events.
At the first Games, the only event was the stadion, a race held over about 600 feet (190 m). Later, there were events such as boxing, wrestling, and equestrianism (including chariot racing). By 720 BCE most athletes took part naked, a tradition that one story attributes to a runner who lost his shorts during the stadion. Participants competed for the honor of their native city-state, but there were no team sports. Non-citizens, slaves, and women were not allowed to take part, though by the sixth century BCE women had their own separate games, the Heraia. The Games continued to be held every four years for almost twelve centuries, until 393 CE when the Christian Roman emperor Theodosius I banned them as part of a purge of pagan ceremonies and rituals.
“The winner … keeps happiness beside him sweeter than honey.”
Pindar, “Olympia 1” victory ode (c. 500 BCE)
The Olympic Games first encapsulated the idea that sport could contribute to peace and harmony between nations. The amateur nature of sport was emphasized through the awarding of a simple laurel wreath or olive branch to the winner. The glory of winning—even the joy of taking part—was enough. JF
c. 700 BCE
Continual Progress
Ancient Greece
The belief that humankind is in a continual state of self-improvement
The notion that human society should always be in a state of progression is as old as antiquity. All civilizations have, from their inception, held firm to the idea that continual moral, religious, and material improvements are an inevitable consequence of our inquisitive, aspirational natures. The Greek poet Hesiod (fl. c. 700 BCE) spoke of human progression in Works and Days. In Aeschylus’s fifth-century BCE play Prometheus Bound, Prometheus is consigned to eternal punishment by Zeus for giving humans the gift of fire in a heroic attempt to free them from ignorance and enable them to pursue the loftier realms of art, culture, and learning. In The City of God in the fifth century CE, St. Augustine of Hippo wrote of the genius of man and of what he saw as the stages of our continual development. It was not until the seventeenth century and the French “Quarrel of the Ancients and Moderns” that the idea of constant progress in Western civilization was first debated.
“Men through their own search find in the course of time that which is better.”
Xenophanes, ancient Greek poet
On one side of the Ancients/Moderns argument were those who felt that contemporary society could never hope to match the achievements of the societies of classical antiquity; on the other were those, such as the author Bernard le Bovier de Fontenelle (1657–1757), who argued that the seventeenth-century human mind was every bit as rich and imaginative as it was in the time of Aristotle and Homer. Humankind’s economic and material progression continued to be assumed in Adam Smith’s The Wealth of Nations (1776), and again in G. W. F. Hegel’s Philosophy of History (1831), in which he writes of our “impulse of perfectibility.” BS
c. 700 BCE
Aqueduct
Assyria
A man-made structure designed to take water from one location to another
Aqueducts were invented in the ancient world as a means to channel water to a desired location. One of the first major examples was built in 691 BCE to carry water some 50 miles (80 km) to the capital of the Assyrian Empire, Nineveh. The Romans developed the idea further, constructing aqueducts capable of carrying large quantities of water above the level of the surrounding land. Some of these are still standing, including the Pont du Gard at Nîmes, France, and the longest Roman aqueduct of them all, 87 miles (140 km) long, built at Carthage in present-day Tunisia. Medieval examples include a system of irrigation channels known as levadas that transferred water from the moist north of Madeira through the central mountain range to the dry, more populous south.
“A stream in the sky … the most impressive work of art I have ever seen.”
Walter Scott, describing the Pontcysyllte Aqueduct
Aqueducts were also used as part of the canal system for transporting goods during Britain’s Industrial Revolution (1760–1840). The network featured structures such as the 1,000-foot (305 m) long Pontcysyllte Aqueduct, which was constructed in 1805 to carry the Llangollen Canal 126 feet (38 m) above the Dee valley—it is still the highest navigable aqueduct in the world. Aqueducts have continued to be built into the modern day, such as the North Bay Aqueduct, built in 1988 as part of the California State Water Project.
The aqueduct harnesses the natural propensity of water to run along gullies and ditches. The genius of the Assyrians was to build human-made structures to take the water where they wanted it to go, to supply the cities where people wanted to live, rather than being dependent on natural sources. JF
c. 600 BCE
All is Water
Thales of Miletus
All matter is composed of water as its basic substance
Thales believed that water could be found in everything, even the rocks surrounding this waterfall.
The ancient Greek philosopher Thales (c. 624–546 BCE) posited the idea that “all is water” as one of the first abstract explanations for the origin of matter. Just as the abstract notion of a “building” encompasses everything from homes to arenas and temples, the idea that all is water similarly offers an idea that unifies all matter in the world as a single substance.
In Thales’s time it was commonly believed that matter was composed of one of four elements: air, earth, fire, or water. Yet for Thales this theory was incomplete. Through his observations of the world, he concluded that everything has, at least in part, some water in it. The oceans are obviously composed of water, but water is also found in animals, plant life, clouds, and even in rocks, which do not appear to be wet, yet have a degree of moisture to them. For Thales this led to the conclusion that of the four elements, water must be the most important. If water is present everywhere, he reasoned, it must be the fundamental substance present in all things.
“If there is magic on this planet, it is contained in water.”
Loren Eiseley, anthropologist and philosopher
Even though the notion that there are only four elements and that water is the most important is no longer given credence, the idea is renowned more for what it did not presume than for its accuracy. Prior to Thales and his aquatic thesis, the ancient Greeks approached questions of origins and explanations in terms of myths and gods. Thales’s idea turned such thoughts away from the supernatural, and instead toward an abstract notion that explains everything. In that idea lay the foundation for Greek philosophy, science, and naturalism, and all it bore. MT
c. 600 BCE
The Arch
Etruria
A weight-supporting structure that literally opened up the design of buildings
Arched windows above the gateway of the Al-Ukhaidir fortress in Iraq, built in 775.
The first true arches appeared among the Etruscans in around 600 BCE, used in the construction of gates, bridges, and drains. Popularized by the ancient Romans, the arch was later adopted by the Arabs and the medieval architects of northern Europe.
Arches are constructed from a series of wedge-shaped blocks known as voussoirs, which are juxtaposed to form a variety of shapes: the most common design is a semicircle, but arches may also be segmental (less than half a circle), pointed (formed from two intersecting arcs), or based on other non-circular curves. The shape of arches has long been a prime characteristic of architectural style, with, for example, Romanesque rounded arches giving way to Gothic pointed ones.
The central, uppermost voussoir is known as the keystone. This stone is the crucial component: until it is firmly in place, the arch requires support from below, which is usually provided by temporary wooden struts.
The aesthetic appeal of arches is obvious; their practicality may be less apparent. In fact, arches are structurally more sound than any lintel (horizontal support) because they are made of small blocks rather than a single, large crossbeam. Since downward pressure forces the voussoirs together, arches are capable of bearing much greater loads than lintels, although single arches require buttresses on either side to resist the diagonal forces that would otherwise push the uprights apart and cause the structure to collapse. With a series of arches, however, the thrust of each span counteracts the thrusts of its neighbors, and hence they require only light support: this is the principle applied in Roman aqueducts and many bridges built between the Middle Ages (c. 500–c. 1450) and the height of the Steam Age in the eighteenth century. Arches are widely used in modern building construction because materials such as steel and concrete reduce the overall weight and further reinforce the stability of the structure. GL
c. 580 BCE
Infinity
Anaximander
An indefinitely great number or amount of something
A rock carving of the mathematical symbol for infinity, also known as the lemniscate symbol.
The idea of an endless amount of something is a concept that has perplexed thinkers throughout intellectual history. In the sixth century BCE the ancient Greek philosopher Anaximander (c. 610–546 BCE) suggested that the four elements that make up the world (air, fire, water, earth) originated from an unlimited primitive substance called to apeiron: the infinite. In his Physics (c. 350 BCE), Aristotle (384–322 BCE) later argued that there are two ways of understanding infinity—potential and actual—and that an actually infinite quantity is impossible (except for time). There is only an “illimitable potentiality of addition” and an “illimitable potentiality of division,” for “it will always be possible to find something beyond the total.”
Anaximander’s use of infinity as all of reality seems to imply some qualitative notion of significance, rather than an enumeration of discrete quantities. In contrast, Aristotle refers to quantities of parts: “the infinite … is not that of which no part is outside, but that of which some part is always outside.”
Aristotle’s quantitative treatment of infinity became the standard for studying the concept, and his conclusions were well respected until the discovery of calculus demanded a more thorough analysis. Gottfried Leibniz (1646–1716) and Isaac Newton (1642–1727), developing calculus independently of one another, disagreed about the role of infinitesimally small quantities in explaining integrals. Newton used the notion sparingly and developed the Method of Fluxions to avoid them; Leibniz made extensive use of them. Aristotle’s challenge to infinite quantities remained until German mathematicians Richard Dedekind (1831–1916) and Georg Cantor (1845–1918) proved the logical possibility of actually infinite sets in the late nineteenth century. Today, various conceptions of infinity remain hotly contested in both philosophy and mathematics. JW
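Dedekind’s criterion for an “actually infinite” set can be stated in a single line. The formulation below is a modern restatement (not drawn from the original text), together with the classic example of the natural numbers.

```latex
% Dedekind's definition: a set S is infinite if and only if it can be
% mapped one-to-one onto a proper subset of itself.
\[
  S \text{ is infinite} \iff \exists\, f : S \to S \text{ injective with } f(S) \subsetneq S.
\]
% The natural numbers satisfy this via the successor map, which pairs
% the whole of N one-to-one with the proper subset N \ {0}.
\[
  f(n) = n + 1 \quad \text{maps } \mathbb{N} \text{ one-to-one onto } \mathbb{N} \setminus \{0\}.
\]
```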
c. 580 BCE
Fables
Attributed to Aesop
The idea of presenting criticism or advice indirectly in a simplified, fictional setting
A bronze statue from between 330 and 100 BCE, believed to depict Aesop holding a papyrus scroll. Aesop is traditionally described as being very ugly.
A fable is a narrative, in prose or verse but usually simple and brief, that is intended to convey a moral lesson. Fables frequently involve nonhuman characters—animals (real or mythic), plants, artifacts, forces of nature, and so on—that are represented as having human attributes. Fables are a common form of folk literature; the best-known fables of the Western world are credited to the legendary figure Aesop, who is supposed to have been a slave in ancient Greece sometime between 620 and 560 BCE.
“Fable is more historical than fact, because fact tells us about one man and fable tells us about a million men.”
G. K. Chesterton, “Alfred the Great” (1908)
In the ancient classical world, fables were not considered as fare for children nor as works of literature in their own right. Rather, they were used as vehicles for indirect—and thus carefully polite—criticism and persuasion. For example, Xenophon (c. 430–354 BCE), in his Memorabilia (c. 371 BCE), describes Socrates advising a citizen named Aristarchus to tell his ungrateful relatives—to whom he had provided capital for a business and who are now accusing him of idleness—the fable of the dog and the sheep, concluding, “Tell your flock yonder that like the dog in the fable you are their guardian and overseer.”
Interest in fables remained high through classical antiquity, the Middle Ages, and the Renaissance, with collections of fables—typically ascribed to Aesop—serving as the basis for rhetorical textbooks and literary works. Jean de La Fontaine (1621–95) produced Fables (1668–94), which are perhaps the best-known original fables in modern times.
As literary tastes developed in sophistication, fables increasingly became the province of humorists such as George Ade and children’s writers such as Dr. Seuss—although the defamiliarizing effect of fables, with the artistic form being used to stimulate fresh perception of a familiar subject, is still deployed in books such as George Orwell’s criticism of Stalinism, Animal Farm (1945). GB
c. 550 BCE
Biological Warfare
Ancient Assyria
Using living organisms as a weapon to kill, incapacitate, or harm an enemy
A ninth-century BCE stone bas-relief from the palace of Ashurnazirpal II in Nimrud, Mesopotamia, showing an Assyrian soldier cutting the rope on a pail above a well in the besieged city.
Biological weapons are designed to take advantage of the dangerous and deadly properties of natural toxins and pathogens. Because biological weapons have the potential to reproduce themselves—sometimes within a human or animal host—they also have the potential to be incredibly destructive, and thus extremely effective.
“The world really is just one village. And our tolerance of disease in any place in the world is at our own peril.”
Joshua Lederberg, molecular biologist
The earliest known use of biological weapons occurred in the sixth century BCE, when ancient Assyrian warriors poisoned the wells of their enemies with a species of ergot fungus that naturally grows on rye grain. In 190 BCE, the Carthaginian military commander Hannibal used jars full of venomous snakes as a weapon in naval warfare when he launched them into enemy ships to cause panic. In the fourteenth century CE, Tartar warriors took the corpses of comrades who had died of the plague and launched them into besieged cities. Once germ theory allowed scientists to identify the causes of disease more easily, and thus potentially to create more effective biological weapons, the international community responded by declaring the use of such weapons illegal in the Geneva Protocol of 1925. Nevertheless, the Japanese successfully used biological warfare against the Chinese people during the Sino-Japanese War (1937–45) and World War II (1939–45).
By 2011 biological weapons had been disavowed by at least 165 countries, and today they are primarily seen as potential terrorist weapons. Isolated incidents of biological weapon use have occurred, though their influence has not been significant. Nonetheless, the potential danger that such weapons pose continues to be of significant concern to governments around the world. Most modern governments maintain that they have no offensive biological weapons programs, but studying natural pathogens and biological agents for defensive purposes remains a part of many state defense programs. MT
c. 550 BCE
Philosophical Hedonism
India
Human actions should be motivated by the pursuit of pleasure
How should we live? We pursue education so that we can get a career, so we can make money, so we can buy things, so we can … what? Presumably, we do not want a career or money just to have a career or money, but in order to be happy. The idea that the morally good or “right” motivation for acting is the pursuit of pleasure and avoidance of pain is called hedonism.
Hedonism can be traced to the sixth-century BCE Indian philosophy Cārvāka, but its most influential form was in the ancient Greek teachings of Aristippus of Cyrene (c. 435–356 BCE) and Epicurus (341–270 BCE). Regarding pleasure as the only valuable pursuit, hedonism sets itself apart from other widely accepted moral views, such as that a person has moral duties to do certain things regardless of whether they make them happy (deontology) and that a person has obligations to do whatever God commands, irrespective of the impact on their own welfare (divine command theory).
“[We] do everything for the sake of being free of pain and mental distress.”
Epicurus, in a letter to Menoeceus
However, philosophical hedonism should be distinguished from the mere pursuit of pleasure. While some accuse hedonists of advocating a life of debauchery, philosophical hedonists reject this characterization. Epicurus argued that while every pleasure is good, “it does not follow that every pleasure is a choice worthy without qualification.” He extolled traditional virtues of self-sufficiency, prudence, and even a healthy diet, since they too contribute to a lifetime of happiness. Though hedonism was rejected by many influential moral philosophers (such as Thomas Aquinas and Immanuel Kant), it continues to play an influential role in contemporary moral and political thought. JW
c. 550 BCE
Daoism
Laozi
The attainment of tranquility by living in harmony with the natural world
An illustration (c. 1700) of the three sages of T’ai Chi, a martial art derived from Daoism.
Daoism is a Chinese philosophical and religious tradition that originated with Laozi (fl. sixth century BCE) and was later expanded on by Zhuangzi (c. 369–286 BCE). It is a type of naturalism that encourages human beings to live in harmony with the Dao, the natural world that is the basis of all existence. The Dao manifests itself as de, the particular things that we see in the world, which contain within them certain proportions of yin (negative or destructive forces) and yang (positive or creative forces). Everything contains some proportion of yin and yang: for example, we can see things only when there is both light and shadow, and music exists as a combination of notes and rests.
If there is an overabundance of yin or yang, the Dao has a tendency to balance itself by reverting to the opposite extreme. Daoists therefore practice wu wei, or “non-interference”: rather than acting against nature, a person should instead follow the natural flow of events and turn them to their own advantage (like a surfer moving in harmony with a wave). Politically, this results in a minimalistic approach to government: a good ruler should educate the people so that harsh laws are unnecessary.
“The sage helps the natural development of all things and does not dare to act.”
Laozi, Daodejing 64 (sixth century BCE)
Daoism has had an enormous influence upon East Asia, particularly China and Taiwan. Like Confucianism, its core philosophical tenets are deeply ingrained in the culture. Daoist metaphysics influenced Mahayana Buddhism, which led to the creation of Chan (Zen) Buddhism. Core principles of Daoism have been a cornerstone of the martial arts (for example, Bruce Lee’s Tao of Jeet Kune Do). JM
c. 550 BCE
Ju
Laozi
True strength is gained by yielding to force, rather than resisting it
A Song Dynasty (960–1279) statue of Laozi, the Chinese master philosopher and father of Daoism.
The principle of ju (rou in Chinese) was first articulated by the sixth-century BCE Chinese Daoist philosopher Laozi in his classic philosophical text, the Daodejing. Ju can be translated as “suppleness,” “yielding strength,” or “gentleness,” and refers to the ability to adapt dynamically to pressure so as to overcome it.
The Dao (the natural world) can be thought of as similar to a rubber band stretched taut between two fingers: any attempt to pull it builds up potential energy that will eventually cause it to snap back to the opposite position. To avoid this reversion, a Daoist should not directly oppose force, but should instead flow with it and turn it to their advantage. Two classic examples illustrate this concept: bamboo and water. In the high winds and freezing rain of an ice storm, the rigid branches of trees will snap off, while the more flexible stalks of bamboo survive by bending under the pressure and then springing back. Water flows around a rock thrown into a pond, yet has the power to slowly erode the stone or shatter it with a crashing wave.
Ju is a core underlying principle in the Japanese and Chinese martial arts, particularly the disciplines of judo and jujutsu (jiu-jitsu), which are both named after the concept. It requires exponents to be ready to adapt themselves to whatever maneuver their adversary approaches them with. When a small judo practitioner is shoved by a larger opponent, for example, the practitioner does not shove back, but instead pulls the opponent forward, redirecting that force and throwing the opponent to the ground. It is only by yielding to the opponent’s force that the practitioner is able to triumph. Bruce Lee also made extensive use of ju in his philosophy of Jeet Kune Do, encouraging the martial artist to fit in with each opponent by adopting a “way of no way” that eschews set forms. JM
c. 550 BCE
Wu Wei
Laozi
Maximum efficiency comes as a result of minimal interference
A group of men practicing T’ai Chi in front of the Temple of Heaven in Beijing, China.
The concept of wu wei was first articulated by the Chinese Daoist philosopher Laozi (fl. sixth century BCE) in his Daodejing, and then later developed by Zhuangzi (c. 369–286 BCE) in the fourth century BCE. Wu wei can be translated as “non-interference” or “noncoercive action,” and the concept refers to letting things take their natural course with minimal intervention.
Wu wei does not mean total inaction; rather it is a way of maximizing efficiency by minimizing interference. The goal of Daoism is to fit in seamlessly with the workings of the natural world, and wu wei is the method through which this harmony is achieved. Zhuangzi gives the example of Cook Ding, who is so skilled as a chef that he is able to butcher a whole ox without dulling the edge of his knife. Just as understanding anatomy helps Cook Ding to carve up his ox, understanding nature helps the Daoist to harmonize his actions with the workings of the natural world. Because the de (the particular way that the negative and positive forces of yin and yang are focused in a given context) is constantly shifting, the Daoist must learn to ride the flow of the Dao in the same way that a kayaker avoids collision by navigating the smoothest path through turbulent waters.
The concept of wu wei has been particularly influential in government, environmental philosophy, and the martial arts. Politically, a good ruler should focus upon educating the population rather than trying to exert control: society functions best when the virtues and desires of the people are in harmony with nature. In environmental philosophy, wu wei rejects consumerism in favor of minimizing the negative environmental impact that humans have upon the natural world. Martial artists strive to fit in with the movements of their opponents, using each attacker’s energy against him rather than opposing it directly. JM
c. 534 BCE
Drama
Thespis
A method of performance in which a story is acted out for entertainment
The third-century BCE theater at Epidaurus, Greece, is still used for dramatic performances today.
“Drama” refers to a performing art, but the term also encompasses the associated body of literature that the performances are based on. While aspects of theater, such as performance, costumes, and storytelling, were present in all ancient cultures, Thespis, a sixth-century BCE singer of liturgical poems in ancient Greece, is credited by Aristotle (384–322 BCE) with inventing a style in which the singer played the various characters in the story with the aid of differing masks. Thespis staged the first tragedy—and thus the first drama—in c. 534 BCE.
From these origins, drama swiftly developed in the Western world. Aeschylus (c. 525–456 BCE), whose The Persians is the earliest surviving play, is credited with introducing a second actor to the performance; his rival Sophocles (c. 496–406 BCE) is credited with introducing a third. Diverse dramatic traditions continued to flourish in different times and places, with influential dramatists including Christopher Marlowe, William Shakespeare, and Ben Jonson in Elizabethan and Jacobean England; Pierre Corneille, Molière, and Jean Racine in seventeenth-century France; and Henrik Ibsen, August Strindberg, and Anton Chekhov in nineteenth-century Europe. Distinct genres of drama, techniques of performance and direction, conventions in costume and scenery, and methods of incorporating music, dance, and poetry all evolved alongside dramatic literature—even a form of drama eschewing performance altogether (the “closet drama”) emerged. In the East, meanwhile, independent dramatic traditions such as Noh and Kabuki in Japan, Chinese opera, and Kathakali in southern India developed.
Through much of history, drama took place in the theater. But in the twentieth and twenty-first centuries, drama was extended from the theater into new media, including radio, cinema, television, and the Internet. This adaptability and widespread prevalence demonstrate the continuing vitality of the art form. GB
c. 534 BCE
Tragedy
Thespis
A drama about suffering that can offer emotional release for its audience
A marble relief from the first century BCE, showing the dramatist Euripides holding a mask of Heracles.
A tragedy is a drama—or by extension a narrative—culminating in a disastrous end for the protagonist. In a typical tragedy, the protagonist is admirable but flawed; the course of events leading to the conclusion is presented as inevitably resulting from his or her character and situation; and the presentation is serious and solemn, as befits a depiction of the grandeur and misery of the human condition. Thespis (fl. sixth century BCE) of ancient Greece is traditionally considered the inventor of tragedy, and thus also the inventor of drama, after staging the first recorded tragedy in c. 534 BCE.
In ancient Greece, tragedy was one of the major genres, and the tragedies of Aeschylus, Euripides, and Sophocles are still regarded as canonical. Aristotle’s discussion of tragedy in the Poetics (c. 335 BCE), based on these works, is still influential. The tragedies of the Roman stage, particularly those of Seneca, were more influential in Renaissance Europe, however. Important modern writers of tragedy include Elizabethan playwright William Shakespeare and seventeenth-century French dramatist Jean Racine. Additionally, there have been attempts to replicate the dramatic functions of tragedy in different art forms, including poetry (such as Lord Byron’s Manfred, 1816–17) and the novel (such as Thomas Hardy’s Jude the Obscure, 1895).
From the eighteenth century onward, tragedy of this nature was increasingly displaced in the theater in favor of its intellectually less demanding cousins: domestic tragedy, melodrama, and tragicomedy, although there are arguable exceptions, such as the twentieth-century plays of Arthur Miller and Eugene O’Neill. The critic George Steiner argued, in The Death of Tragedy (1961), that after Shakespeare and Racine, “the tragic voice in drama is blurred or still.” Steiner attributed the dwindling impact of tragedy to the rise of the optimism associated with the Age of Enlightenment. GB
c. 530 BCE
Pythagorean Theorem
Pythagoras
An equation that explains the relationship between the sides in a right-angled triangle
An Arabian manuscript from 1258, discussing the Pythagorean theorem. It was authored by the Persian philosopher, scientist, and mathematician Nasir al-Din al-Tusi.
Expressed as the equation A² + B² = C², the Pythagorean theorem demonstrates the relationship between the sides of all right triangles. In a right triangle, meaning a triangle in which one angle is 90 degrees, the value of the squared length of the longest side of the triangle will always be equal to the sum of the squared lengths of the other two sides.
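As a concrete illustration (a standard textbook example rather than anything drawn from Pythagoras’s own writings), the familiar 3-4-5 right triangle satisfies the relation exactly:

\[
A^{2} + B^{2} = 3^{2} + 4^{2} = 9 + 16 = 25 = 5^{2} = C^{2}.
\]

Any triangle whose sides obey this equation must contain a right angle, which is why a rope knotted into lengths of 3, 4, and 5 units has long been used to lay out square corners.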
“Number is the ruler of forms and ideas, and the cause of gods and demons.”
Pythagoras
Even though there are ancient Babylonian tablets that express the Pythagorean theorem in terms of the measurements found in right triangles, the ancient Greek mathematician Pythagoras of Samos (c. 570–c. 495 BCE) is widely credited with identifying the equation. No original written works by Pythagoras survive to prove this definitively, and it is impossible to determine whether the idea was his alone. It is claimed, however, that upon discovering his eponymous mathematical truth, Pythagoras offered a hecatomb to the gods (a public sacrifice of 100 oxen). He and many of his contemporaries believed that mathematics was so connected to the divine that they formed a religious movement around his discovery. When he and his followers—who dubbed themselves the Brotherhood of Pythagoreans—moved to Croton, a Greek colony in southern Italy, their controversial beliefs led to public outcry, forcing them to flee for their lives.
The Pythagorean theorem is not merely a deduction about right triangles and the relationship of their sides. It has since become a foundational mathematical theorem, and an indelible part of the modern world. Through it, Pythagoras showed that the abstract notions of numbers and mathematics correspond to humanity’s everyday perceptions of the real world. His theorem revealed that nature had a structure, a structure composed of equations, and those equations were something humanity could comprehend, measure, and use. MT
c. 530 BCE
Pythagorean Tuning
Pythagoras
A tuning system derived from the intervals of the perfect fifth and perfect fourth
A woodcut from 1492 showing the biblical figure Jubal as the discoverer of music (top left) and Pythagoras studying the relationships between intervals in different instruments.
Pythagorean tuning was the brainchild of the Greek philosopher Pythagoras of Samos (c. 570–c. 495 BCE), who formalized a theory in which musical intervals could be derived from simple ratios of string lengths. The tuning system uses the intervals of the fourth (4:3) and the fifth (3:2) to generate the scale. The difference between the two (9:8) makes up the whole tone, and the minor second is what remains of a fourth after two whole tones are subtracted, the limma (256:243). As a consequence, six Pythagorean whole tones, which would logically make up an octave (2:1), in fact exceed it by an interval called the Pythagorean comma (531,441:524,288), small but noticeable to the human ear.
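The size of the comma follows from simple arithmetic with the ratios given above (a modern worked reconstruction, not a calculation found in any ancient treatise). Stacking six 9:8 whole tones overshoots the 2:1 octave:

\[
\left(\frac{9}{8}\right)^{6} = \frac{531{,}441}{262{,}144},
\qquad
\left(\frac{9}{8}\right)^{6} \div 2 = \frac{531{,}441}{524{,}288} \approx 1.0136,
\]

an excess of roughly a quarter of a modern semitone.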
“Take but degree away, untune that string, and hark, what discord follows!”
William Shakespeare, Troilus and Cressida (1602)
Despite its purely theoretical origin and practical problems—chords including thirds sound out of tune, for example—Pythagorean tuning was favored throughout the medieval period and is still used today in shaping melodies on non-keyboard instruments, such as the violin. To many musicians, the larger Pythagorean whole tone and smaller half tone sound better than the ones of equal temperament (the scale used at keyboards today, in which the octave has been divided into twelve equal parts and which makes it possible to play harmonies in all keys).
The obsession of early music theorists with the size of scale steps has had a key influence on music, inspiring numerous attempts throughout history to create the perfect intonation system: from compromises to create perfect harmonies in a limited number of keys, to contemporary systems utilizing tensions and dissonances for expressive effects, such as the use of the syntonic comma—the difference between a just intonated major third and the Pythagorean third—in the music of U.S. composer Ben Johnston. Other attempts include keyboards with more than one key for certain pitches, to enable the tuning to be adjusted to different harmonies. PB
c. 530 BCE
Vegetarianism
Pythagoras
A conscious decision not to eat meat and other animal products
Vegetarianism is the principled refusal to eat meat. The ancient Greek philosopher Pythagoras (c. 570–c. 495 BCE), who required members of his philosophical society to abstain from eating meat, is often viewed as the first important vegetarian. Before the word “vegetarian” was coined in the 1840s, non-meat-eaters were often called “Pythagoreans.”
What is wrong with eating meat? Vegetarians have offered various criticisms of the practice, contending that eating meat is cruel (often, from the twentieth century onward, citing the methods of industrial meat production), unethical (often citing recent work in practical ethics, particularly by Peter Singer), unhealthy (often citing the fact that vegetarians tend to be less obese and less likely to die from ischemic heart disease), unnatural (often claiming, wrongly, that prehistoric humans subsisted on a vegetarian diet), environmentally unfriendly (often citing the relative inefficiency of meat production), and in conflict with the tenets of religious faith (sometimes citing reincarnation, as with the ancient Pythagoreans and several modern Hindu sects). There are also different degrees of vegetarianism: ovo vegetarians will eat eggs, lacto vegetarians will consume milk, and ovo-lacto vegetarians both, whereas vegans forgo all products derived from animals, and fruitarians furthermore forgo all plant foods whose harvesting kills the plant, eating only fruits, nuts, and seeds. Vegetarianism is typically associated with a similar refusal to use products derived from animals, such as leather and wool.
The modern vegetarian movement is dated to 1847, when the Vegetarian Society was founded in Great Britain. In Western countries, vegetarianism has been increasing since the 1960s, and due to continuing and intensifying ethical and environmental concerns, it is likely to flourish in the future. GB
c. 530 BCE
Nirvana
Siddhartha Gautama (Buddha)
The state of enlightenment achieved when karma and craving are extinguished
An eighteenth-century painting depicting a reclining Buddha during the transition from this world to nirvana.
The concept of nirvana originated with the historical Buddha, Siddhartha Gautama (c. 563–483 BCE), during the sixth century BCE. Though Buddhism is grounded in Hindu philosophy, it is a heterodox approach to spiritual cultivation that eschews some of the core tenets of Hinduism, and the belief in nirvana epitomizes this philosophical shift.
The Sanskrit word nirvana literally translates as “snuffing out,” and refers to the eradication of the karma (a force generated by a person’s actions) and craving that bind an individual to the cycle of rebirth (samsara). One of the central ideas of Buddhism is anatman, which is a rejection of the Hindu notion that there is a permanent, essential self (the atman, or soul) that is one with ultimate reality. Because there is no self to whom karma can attach, Buddhism explains the transfer of karma using a candle metaphor: just as an old candle (one’s dying body) begins to peter out, its flame (karma) ignites the wick of a second candle (an infant’s body). Thus, karma is able to transfer without having to attach to a self. Nirvana occurs when one has reached enlightenment and eliminated ignorant attachment to false ideals, which snuffs out karma and allows one to exit the cycle of rebirth. Theravada Buddhism divides nirvana into two stages, the awakening of the arhat (enlightened being) and parinirvana (the exit from samsara upon death), though practitioners in the Mahayana tradition take the bodhisattva vow to not enter nirvana until all sentient beings have been awakened.
Though nirvana signifies a key split from Buddhism’s Hindu origins, the concept has influenced Hinduism and is present in the 700-verse scripture the Bhagavad Gita, in which Krishna uses the concept of brahma nirvana to describe the state of liberation in which karma is eradicated and one has achieved union with Brahman (ultimate reality). JM
c. 530 BCE
The Four Noble Truths
Siddhartha Gautama (Buddha)
The Buddhist path to enlightenment lies in freedom from desire
According to traditional biographies, Siddhartha Gautama (c. 563–483 BCE) was a prince from northern India who renounced his privileged life to seek spiritual awakening. At first he followed the ascetic tradition of Indian holy men, mortifying the flesh with extreme fasting and other hardships. After seven years of such striving, and now so emaciated as to be barely alive, he came to sit under the Bodhi Tree at Gaya. One evening, he accepted a little nourishing food, relaxed, and felt a profound change overtake him. After sitting through the night, at dawn he achieved a state of perfect understanding, becoming a Buddha (enlightened one).
Siddhartha’s insight into the nature of reality was later formulated as the Four Noble Truths of Buddhism. The first truth is that life, as usually lived, is suffering (duhkha)—frustration of desire, losing what we want, having to live with what we do not want. The second truth is that suffering results from clinging to the illusory things of the world with desire or hatred, striving for one or fleeing another. The third truth spells out the solution: the achievement of nirvana, the state of enlightenment in which the world can be seen for the delusion that it is. Freedom from illusion will mean freedom from attachment to illusory desires. The final truth sets out the practical path to enlightenment—dharma—including right understanding, right speech, right action, and right concentration.
“Birth is suffering, aging is suffering, illness is suffering, death is suffering.”
The Pali Canon of Buddhist scriptures
In the context of the traditional Indian belief in reincarnation, nirvana is seen as the escape from the endless cycle of death and rebirth. Freedom is found in the realization that even the self is an illusion. RG
c. 530 BCE
Music of the Spheres
Pythagoras
The orbits of the planets emit different tones that together create a musical harmony
An engraving of Apollo presiding over the Music of the Spheres (1496) by Franchino Gaffurio.
Attributed to Pythagoras (c. 570–c. 495 BCE), “music of the spheres” is the idea that acoustical relationships between the planets are governed by their distance from Earth and their rotational speed, just as arithmetic divisions of a string generate different musical intervals. Plato (c. 424–c. 348 BCE) described it as eight Sirens (one for each body of the solar system) orbiting around a turning spindle, each “hymning” different tones that together form one harmony. Around 900 years later, the philosopher Boethius (c. 480–c. 524) brought back these ideas with his notion of Musica mundana (the music of the world), to which he also added Musica humana (the harmony of the human body).
This connection between music, mathematics, and astronomy had a profound impact on history. It resulted in music’s inclusion in the Quadrivium, the medieval curriculum that included arithmetic, geometry, music, and astronomy, and along with the Trivium (grammar, logic, and rhetoric) made up the seven liberal arts, which are still the basis for higher education today. The term “music of the spheres” also later appeared in connection with psychoanalysis, in which it was used as a metaphor to explain the relationship between the self and the world.
“The melody … is made by the impulse and motion of the spheres themselves.”
Cicero, Somnium Scipionis (c. 50 BCE)
Modern physics has renewed interest in the concept. Using music as a metaphor for string theory, physicist Brian Greene (b. 1963) compared resonating particles to the different tones generated by a violin string. Most recently, NASA’s Chandra X-ray Observatory detected sound waves from a black hole with a frequency of fifty-seven octaves below middle C. PB
c. 530 BCE
Arhat
Siddhartha Gautama (Buddha)
A being who has attained enlightenment, characterized by the qualities of profound wisdom and compassion
A painting of an arhat sitting and reading by Guanxiu, a celebrated Buddhist monk, painter, poet, and calligrapher of the ninth century.
The notion of the arhat was developed by Siddhartha Gautama (c. 563–483 BCE), the historical Buddha, during the sixth century BCE. It was expounded in the Lotus, Heart, and Diamond Sutras and remains a core tenet of Buddhist philosophy.
“Can an Arhat have the thought, ‘Have I attained the Way of the Arhat?’”
The Diamond Sutra
Arhat literally means “holy one” or “worthy one” and refers to a person who has attained enlightenment through diligent practice. The Buddha taught that the world is characterized by duhkha, which can be translated as “suffering,” “sorrow,” or “profound unsatisfactoriness.” Our ignorant attachment to false ideals causes us to act selfishly, which results in karma (a force generated by a person’s actions), which in turn binds us to samsara, the cycle of rebirth. However, if we cultivate ourselves with regard to wisdom, compassion, and meditation, we can wipe out the ignorance and see reality as an impermanent process in which all things are fundamentally interdependent. Once awakened to this truth, a person attains enlightenment and eradicates all karma, attaining arhat status. When arhats die, there is no karma to bind them to samsara, so they enter a state of blissful oblivion, free from the endless torment of reincarnation. Arhats are characterized by profound wisdom and compassion, and they typically act as teachers for other Buddhists before they die and forever exit the cycle of rebirth.
In the Theravada tradition, the arhat is considered to be the goal of spiritual cultivation. The term is also used in Jainism to represent a spiritual being who embodies purity. Mahayana Buddhism does not embrace the notion that the arhat is the final stage; a person who has attained enlightenment is expected to take the bodhisattva vow to remain in samsara until all sentient beings are awakened. This divergence of opinion remains one of the key differences between the Theravada and Mahayana traditions. JM
c. 510 BCE
Comedy
Ancient Greece
A lighthearted narrative intended to amuse rather than enlighten the audience
A comedy is a drama—or a narrative—intended to amuse or delight. It generally features a happy ending for its protagonist, is typically located in a familiar setting and uses colloquial diction, and often flatters the audience with a sense of superiority to the characters. The tradition of dramatic comedy arose in c. 510 BCE in ancient Greece, around the time that the foundations of democracy were being laid. Political and social satire was a key feature of early comedy, together with slapstick action and scatological and sexual jokes.
In ancient Greece, the greatest comedies were those of Aristophanes (c. 450–c. 388 BCE), whose plays combined philosophical speculation and political satire, and Menander (c. 342–c. 292 BCE), whose “New Comedy” style, involving young lovers and stock characters, was influential on Roman playwrights such as Plautus and Terence, and later on William Shakespeare and Lope de Vega in the sixteenth and seventeenth centuries. In modern drama, comedy has proliferated and diversified with bewildering variety. Important types of comedy are often identified as romantic comedy, with lovers overcoming misunderstandings and obstacles to achieve a happy union; satiric comedy, with criticisms of persons or practices underlying the humor; the comedy of manners, with witty dialogue in a sophisticated milieu; and farce, with sexual innuendo, physical humor, and unsubtle wordplay. These examples are by no means exhaustive. With the advent of cinema, there came screwball comedy; with the advent of television, there came situation comedy. In modern times, comedy also succeeded in colonizing newer art forms, such as novels and essays.
Unlike tragedy, its counterpoint, comedy is still going strong in the twenty-first century, both in drama and elsewhere. It has even inspired new art forms, such as stand-up comedy. GB
c. 508 BCE
Freedom of Speech
Ancient Greece
The right to speak freely without fear of reprisals from others
Freedom of speech—the ability to express ideas without fear of violent repercussions—predates many other “natural” human rights, and arguably became a political issue in ancient Athens when discussions about how to structure democracy were underway in c. 508 BCE. Since that time, freedom of speech has been debated and legislated in almost every Western political context for almost every generation. The struggle to keep speech free will likely never cease so long as communication technologies and voice boxes allow people to put their words out into the world.
Democracy relies on a plurality of ideas and the effective communication of those ideas, so the Athenians were invested in making sure that citizens—at that time aristocratic males—had the ability to speak without fear of reprisal from their dissenting neighbors. The societies that rose and fell subsequent to Athens also encoded freedom of speech to varying degrees. The better laws, such as England’s Bill of Rights in 1689 and the United States’ Bill of Rights in 1789, declared freedom of speech for all citizens. Efforts to censor voices arguing against standing powers proved dangerous to the wellbeing of the society, and the persistence of the negative consequences of censorship led to the encoding of freedom of speech in the Universal Declaration of Human Rights in 1948.
“Give me the liberty to know, to utter, and to argue freely according to conscience …”
John Milton, poet
Today, freedom of speech is a slippery freedom to realize completely, since there are laws against libel, slander, obscenity, and copyright violation. However, in an increasingly connected world, discussions about freedom of speech remain pertinent. MK
508 BCE
Democracy
Ancient Greece
A state in which citizens can vote to determine its policies, leaders, and laws
The bema of Pnyx, the speakers’ platform for democratic assemblies held in ancient Athens from 508 BCE.
A democratic state is one in which citizens have the ability to vote in order to determine laws, choose elected officials, or otherwise influence governmental activity. This definition includes several types of governments, such as those in which citizens have direct influence over policies, laws, or elections, and those in which voters choose representatives to make such decisions.
Democracy’s roots extend to the ancient Greek city of Athens, where in 508 BCE the form of government first appeared under the leadership of Cleisthenes (c. 570–after 508 BCE). Athenian democracy faded near the end of the fourth century BCE, though democratic principles lived on during the Roman Republic (509–27 BCE). After the end of the Republic, it was not until the formation of the first modern democracies that this type of government returned. The Corsican Republic became the first modern democracy in 1755 when it adopted a national constitution based on democratic principles. Though it ceased to exist in 1769, it was followed shortly after by the founding of the United States of America (1776) and the French First Republic (1792).
“Elections belong to the people. It’s their decision.”
Abraham Lincoln, U.S. president 1861–65
Prior to democracies, only a select few had the ability to direct the politics and actions of most states. Governments ruled by nobles, the wealthy, clerics, or other small groups of people concentrated their power, while those over whom they ruled had little say or influence over state decisions. With democracy, all eligible members of the community could vote and have at least a small influence on its direction. Since its reintroduction in the late eighteenth century, democracy has continued to spread around the world. MT
c. 500 BCE
Eternal Flux
Heraclitus
The concept that the world is in a constant state of change
Eternal flux is a state in which there is always change and nothing remains the same. The first exposition of eternal flux is traditionally attributed to the ancient Greek philosopher Heraclitus (c. 540–480 BCE). Heraclitus was nicknamed “the obscure,” and only fragments of his work survive, so it is difficult to be sure about what he had in mind. According to a traditional reading, Heraclitus embraced a radical form of eternal flux: everything is changing at every time. In Plato’s dialogue Cratylus (c. 360 BCE), Socrates says, “Heraclitus, I believe, says that all things pass and nothing stays … he says you could not step twice into the same river.” Socrates goes on to argue that if the objects of knowledge are constantly changing, then knowledge is impossible. Modern scholars are inclined, though not unanimously, to read Heraclitus as embracing eternal flux in a less radical form: allowing that there is stability but insisting that there is flux underneath it. A river is the same river from moment to moment precisely because it is composed of ever-flowing water.
“Everything is in a state of flux, including the status quo.”
Robert Byrne, author
In its radical form, the idea of eternal flux was directly influential on Plato (c. 428–c. 348 BCE), who regarded the ordinary world as in flux and whose Theory of Forms was intended to locate the objects of knowledge in a changeless realm. The idea of a world in constant change recurs throughout the history of philosophy, and the obscure Heraclitean fragments lend themselves to various different uses. Notable philosophers who have expressed a debt to Heraclitus’s idea of eternal flux include G. W. F. Hegel, Friedrich Nietzsche, and Alfred North Whitehead. GB
c. 500 BCE
Rational / Irrational Numbers
Ancient Greece
The distinction between numbers that can be written as fractions and those that cannot
The Latin word ratio was used to translate the Greek logos, meaning “word” or “reason.” But the Latin concept of ratio, a relationship between two numbers of the same kind, is better thought of in terms of “proportion.” Ratios are most often expressed as fractions (1/2; 3/4) or as relationships of scale (1:2; 1:120). A rational number is a number that can be written as a ratio of whole numbers: the quotient of two integers in which the divisor is not zero (2 is the quotient of 2/1 and of 6/3, for example), or, equivalently, the fraction formed by two integers (1/3; 2/5). Every whole integer (-1; 3) and every finite or repeating decimal expansion (.5; .3333 …) is a rational number.
However, not every number is rational, and this discovery troubled ancient mathematicians. The Pythagoreans of the early fifth century BCE were convinced that numbers are divine and that God and his creation must be complete and intelligible. The existence of numbers such as √2 and π, whose decimal expansions continue infinitely without repeating, caused a scandal. The Greeks called these numbers a-logos (without ratio or reason); the Persian mathematician Al-Khowârizmî (780–850) later distinguished quantities as either “audible” (expressible) or “inaudible” (inexpressible).
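The argument that so unsettled the Pythagoreans can be reconstructed in modern notation (a standard textbook sketch, not a quotation from any ancient source). Suppose √2 were rational and write it as a fraction in lowest terms:

\[
\sqrt{2} = \frac{p}{q} \;\Rightarrow\; p^{2} = 2q^{2},
\]

so p² is even and hence p is even; writing p = 2k gives 4k² = 2q², so q² = 2k² and q is even as well. Both p and q would then share the factor 2, contradicting the assumption that the fraction was in lowest terms, so no such fraction can exist.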
“God made the integers; all else is the work of man.”
Leopold Kronecker, mathematician
However controversial, these numbers were vital to mathematics and became highly important in discussions of infinity and calculus. The term “irrational” can be traced firmly to 1585, although some cite an earlier date. The key developments, however, occurred in nineteenth-century transfinite mathematics. Richard Dedekind (1831–1916), for example, developed a way of identifying “real” (irrational) numbers, known as the “Dedekind cut.” JW
c. 500 BCE
Confucian Virtues
Confucius
The ideal society is developed through the cultivation of five moral virtues
A portrait of the Chinese philosopher Confucius, painted with gouache on paper in c. 1770.
Confucianism is a Chinese philosophical tradition that was founded in the sixth century BCE. The chaos of the Zhou dynasty (1122–256 BCE) made Confucius (551–479 BCE) wonder how China could return to the halcyon days of the Shang dynasty (1766–1122 BCE). He argued that moral virtue was the foundation of a good state.
There are many Confucian virtues, but five are typically considered essential: ren, yi, li, xiao, and xin. Ren translates as “benevolence” or “humaneness,” and refers to the moral sentiments that make us sensitive to the distress of others and to our moral responsibilities. Yi refers to the general ethical principles that guide virtuous action and help resolve moral disputes. Li concerns the observance of the customs and rituals that facilitate harmonious social interaction. Xiao promotes filial piety, a respect for the hierarchical relationships that govern human life. Xin is the virtue of honesty, fidelity, integrity, or trustworthiness. A person should cultivate these five virtues with the goal of becoming a junzi, or exemplary person. If a state is ruled by exemplary persons and populated by citizens who strive to cultivate the virtues, it will be harmonious and prosperous. Rulers should lead by example and educate their citizens so that harsh laws are unnecessary.
“The excellence required to hit the mark in the everyday is of the highest order.”
Confucius, Analects 6.29
Confucius is arguably the most influential Chinese thinker of all time. His political philosophy dominated Chinese government until the Cultural Revolution of the twentieth century. Neo-Confucian thought also heavily influenced Japanese philosophy and politics during the samurai era. Today, the Confucian virtue ethic continues to be pervasive throughout East Asia. JM
c. 500 BCE
Innatism
Ancient Greece
The mind is born with knowledge/ideas, rather than beginning as a “blank slate”
How do we come to know objects as independent (distinct from our minds) and discrete (distinct from one another) entities? One explanation is that we perceive objects as they impress themselves onto our senses. Atomists such as Democritus (c. 460–c. 370 BCE) claimed that objects are clusters of tiny particles, and we perceive them because other atoms bounce from them onto our senses. Another explanation is innatism, the view that ideas of objects exist in our minds when we are born and are later revealed through reason and experience.
Innatism can be traced to the Pythagoreans in the late sixth century BCE, and Socrates (c. 470–399 BCE) defended the idea by demonstrating that an uneducated child can use the Pythagorean theorem if asked the right questions. In Meditations on First Philosophy (1641), René Descartes (1596–1650) used wax to show that our senses relay only ideas of properties, and not ideas of objects. After holding wax to a flame, we come to experience different sets of properties at different times; it is no longer the same shape, size, or temperature as before. Yet we still say it is wax—even the same wax!
“We do not learn, and that what we call learning is only a process of recollection.”
Plato, Meno (fourth century BCE)
Innatism has been influential in education and psychology. For example, the “Socratic Method” of teaching developed from the belief that learning is a process of reminding students of their innate knowledge. Despite innatism’s popularity, in the seventeenth century, philosopher John Locke argued powerfully that it is more plausible to believe all of our ideas are imprinted on the “blank slate” of our minds by experience; psychologists now work largely under this assumption. JW
c. 500 BCE
Intellectual Property
Ancient Greece
Intangible assets owned by their creator in the same way as they own physical property
Intellectual property relates to intangible assets that people create or invent. The concept has existed since classical times when, in about 500 BCE, Greek chefs in southern Italy obtained the exclusive right to prepare a popular dish. Yet it was not until 1474 that the Venetian Republic adopted an intellectual property law that contained most of the provisions found in modern statutes. In addition to recognizing the rights of an inventor, the law provided penalties for infringement, gave incentives for inventors to seek patents, and imposed a time limit on the length of any patent granted. In 1867, the North German Confederation enacted a Constitution that first used the term “intellectual property.”
With the creation of intellectual property, the notion of property came to transcend the material and the tangible. People could own thoughts, ideas, creations, or artistic works. By extension, providing a way to protect creative endeavors allowed people to pursue them with the knowledge that if their efforts bore fruit, they would reap the benefits in the same manner as if they had toiled at a physical creation. At the same time, limitations on intellectual property rights balanced the creator’s exclusive right to benefit against society’s interest in benefiting from a new creation.
Modern intellectual property recognizes several different types, including copyrights for written or artistic works, patents for inventions, trademarks for slogans or symbols, and trade secrets for processes or business practices. In 1967 the World Intellectual Property Organization was established to promote the worldwide protection of intellectual property. It became a specialized agency of the United Nations in December 1974, and today counts more than 180 countries among its members. MT
c. 500 BCE
Roman Gods
Ancient Rome
The gods and goddesses worshipped by the people of ancient Rome, each of whom oversaw a different aspect of human life
A second-century statue of the Roman god Jupiter. Jupiter was the chief deity in Roman mythology, and was also the god of sky and thunder.
Ancient Rome, from its origins as a republic to its growth as an empire, existed for nearly 1,000 years, from around 500 BCE until the fall of the western empire in 476 CE. During that time the ancient Romans worshipped a variety of gods, many of whom were incorporated from the religions of cultures that they had conquered.
“It is convenient there be gods, and, as it is convenient, let us believe that there are.”
Ovid, Roman poet
The Romans believed in a number of different gods, some taken from early Latin tribes and the Etruscan civilization, but many adopted from the ancient Greek colonies of southern Italy. In their adoption of the Greek pantheon, the Romans replaced the Greek names of the gods with Latin ones, for example renaming the Greek messenger god Hermes as the Roman god Mercury, and the Greek god Zeus as the Roman Jupiter, or Jove.
The Romans had a polytheistic religion, with multiple gods that were each responsible for different powers or purposes: Mars was the god of war and Venus was the goddess of love and beauty, for example. The gods were believed to be humanistic in their form, and were described in Roman mythology as being related to each other as a family.
For the Romans, the gods had a very practical, daily impact on their lives. A sea voyage might necessitate honoring the god of the ocean, Neptune, while moving to a new home or preparing for the birth of a child might require sacrifices or rituals to any number of other gods. This day-to-day, almost businesslike, interaction between an individual and the divinity was not so much an exercise in spirituality as much as it was an attempt to control or influence the divine forces in daily life. When Christianity appeared in the Empire in the first century CE, it required a very different set of beliefs and assumptions about the nature of humanity’s relationship with the divine, a belief system that the Romans initially greeted with skepticism and overt hostility. MT
c. 500 BCE
The Golden Ratio
Unknown
A number that is inherent in the formation of aesthetically pleasing art or architecture
The Parthenon (built in 447–438 BCE) in Athens, Greece, was designed according to the golden ratio.
The golden ratio is an irrational number that has fascinated physicists, mathematicians, architects, and philosophers ever since its discovery. An irrational number is one whose decimal expansion continues infinitely without repeating. The number—1.61803398874989484820…—was used by the ancient Greeks as the desired ratio of length in relation to width, and provided the foundational formula for the construction of much of that civilization’s architecture, such as the Parthenon (447–438 BCE).
The use of the ratio most probably predated ancient Greece, with some scholars suggesting that the ancient Egyptian builders of the pyramids at Giza (2520–2472 BCE) used it to determine the final dimensions of the pyramids. However, it was not until the work of ancient Greek mathematicians in around 500 BCE that the golden ratio was first formally described and studied.
As a number, the golden ratio is usually rounded to 1.618. Expressed simply, it is achieved if a line is divided into two parts at the point where the longer part, divided by the shorter part, is equal to the entire length divided by the longer part. For the Greeks, the golden ratio was also the basis of the “golden rectangle,” held to be the most aesthetically pleasing of all possible rectangles. The ratio was later given the name “phi” in honor of the Greek sculptor Phidias (c. 480–430 BCE), whose work was greatly influenced by application of the irrational number.
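In modern algebraic notation (a reconstruction in today’s symbols, not the Greeks’ own), the definition just given becomes an equation whose positive solution is the golden ratio:

\[
\frac{a+b}{a} = \frac{a}{b} = \varphi
\;\Rightarrow\; \varphi^{2} = \varphi + 1
\;\Rightarrow\; \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,
\]

where a is the longer part of the line and b the shorter. A “golden rectangle” is simply one whose sides stand in this proportion; removing a square from it leaves a smaller rectangle of exactly the same shape.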
Leonardo da Vinci referred to the golden ratio as the “divine proportion,” and applied its principles in some of his paintings, including The Last Supper (1495–98). There seems to be no end to its applications. Even in the twenty-first century, plastic surgeons reportedly use it as a template for creating perfect, symmetrical faces, much as Leonardo himself was said to have done in order to construct the beguiling face of Lisa Gherardini, model for the Mona Lisa (1503–06). BS
c. 500 BCE
Religious Pluralism
Ancient Rome
Tolerance toward different religious beliefs and practices
In general terms, religious pluralism is an attitude of tolerance toward any and all religions in the world. In political terms, religious pluralism is a state structure permissive of any and all religious beliefs and practices so long as they do not come into conflict with the laws governing the said state. Religious pluralism is enshrined in the First Amendment to the Constitution of the United States (1791), in the Canadian Charter of Rights and Freedoms (1982), and in the United Nations Universal Declaration of Human Rights (1948).
Ancient Rome (c. 500 BCE–c. 500 CE) is often cited as one of the first states that embraced religious pluralism. Being a polytheistic society, Rome was already predisposed to religious pluralism. Although its rules and procedures for governance and jurisprudence inculcated various tenets of a traditional religion, alternative religions were also tolerated, except in cases in which their practice was perceived as an act of rebellion.
Religious pluralism also existed in Spain during the Umayyad dynasty. After escaping the political and religious turmoil of Damascus in 750 CE, ‘Abd al-Rahmān I (731–788) governed the Muslim-ruled regions of Spain. Despite occasional clashes between Christians and Muslims, Jewish and Christian populations enjoyed most of the same rights and freedoms as did the Muslim population during al-Rahmān’s rule, and throughout the reign of the ensuing dynasty.
Contemporary theorist John Hick (1922–2012) was a strong advocate of religious pluralism. He argued that all truth claims concerning God relate not to the god itself but to one’s subjective experience of God. According to Hick, the world is religiously ambiguous; it can be experienced either religiously or nonreligiously and according to any number of culturally conditioned religious traditions. The world offers no positive support for any one religion. For this reason, all religions should be tolerated if not embraced. DM
c. 500 BCE
Demons
Ancient Greece
Spirits that act in the world as a force of evil and malevolence
A fifteenth-century engraving by Martin Schongauer, showing St. Antony being tormented by demons.
A demon is a form of spiritual being that is found in most of the world’s religions and is commonly associated with an evil force such as the Devil. The origins of this idea can be traced back to ancient Greece, although the original Greek word for “demon,” daimon, did not carry with it the negative connotations that it has today. The word daimon was instead used to describe spirits of both a malevolent and benevolent nature. However, the subsequent adaptation of the idea by the early Abrahamic religions stripped it of this neutral status, and thus the modern word “demon” refers exclusively to evil or malevolent spirits.
The demons of different religions arise in a variety of forms, often with names designating their distinctive features, such as the Se’irim, or “hairy beings,” of the Hebrew Bible. In many religious traditions, demons are believed to be capable of “possessing” a person, place, or object. This has given rise throughout history to the practice of exorcism, whereby a demonic presence is expelled from the person or place it is inhabiting through a series of rituals performed by a priest or other spiritually qualified person.
“Never trust a demon. He has a hundred motives for anything he does …”
Neil Gaiman, The Sandman, Vol.1 (1988)
Exorcism is still an official practice of the Catholic Church today, and belief in demons continues to be a feature of many religious doctrines across the world. An exception to this is the Baha’i Faith, which treats the development of “demonic” traits in people as a psychological manifestation of their personal failure to follow God. Traditions of the occult such as Wicca, on the other hand, treat demons as real beings but see them as possible to manipulate and control. LWa
c. 480 BCE
Eternal Stasis
Parmenides
The idea of a world in which nothing ever changes
Eternal stasis is a state in which there never was, never is, and never will be change. The first exposition of eternal stasis was a work of verse called On Nature, by the early fifth-century BCE Greek philosopher Parmenides of Elea. Parmenides argued for a form of monism, the idea that there is only one thing. And what exists, he argued, is unique, ungenerated, indestructible, and incapable of motion or change. Multiplicity and change are illusions.
Parmenides changed the course of Greek cosmology. Before him, the reality of change was assumed; after him, it was a challenge. A few philosophers agreed with him, with Zeno (c. 495–430 BCE) notably advancing subtle arguments to show the impossibility of change. Other philosophers rejected his monism: by allowing multiplicity of elements (as with Empedocles) or atoms (as with Democritus), they also allowed change. However, the Parmenidean idea that the changeless is superior to the changeable persisted. It was reflected in Plato’s (c. 428–c. 348 BCE) influential Theory of Forms, according to which objects in the changing world are imperfect approximations of the changeless “Forms.” It was also reflected in astronomy, in which the heavens were regarded as changeless until the astronomical revolution of Copernicus, Kepler, and Galileo in the sixteenth and seventeenth centuries.
“What Is is ungenerated and deathless, whole and uniform, and still and perfect.”
Parmenides of Elea, On Nature (c. 480 BCE)
In philosophy, arguments against the reality of change resurged briefly around the turn of the twentieth century in the work of the British idealists F. H. Bradley and John McTaggart. The idea of a superior realm of the changeless persists, typically in connection with theology, mathematics, and ethics. GB
c. 480 BCE
Outer Space
Leucippus
The physical universe beyond the Earth’s atmosphere
An illustration from Otto von Guericke’s Experimenta Nova Magdeburgica de Vacuo Spatio (1672), which included details of his experiments with vacuums.
Outer space is the empty (or virtually empty) space that exists between astronomical bodies; on Earth, it is conventionally regarded as beginning 62 miles (100 km) above sea level. Probably the first thinker to conceive of astronomical bodies as separated by empty space was the ancient Greek philosopher Leucippus (fl. early fifth century BCE). The originator of the atomic theory, Leucippus held that all that exists is atoms and the void. His theory stated that the Earth was formed when a group of atoms in the void became isolated, whirled around one another, and separated by shape and size, with the heavier forming a spherical mass. The process was not considered unique, and the atomists argued that there were innumerable such worlds, separated by the void.
“Space is only 80 miles from every person on Earth—far closer than most people are to their own national capitals.”
Daniel Deudney, Space: The High Frontier in Perspective (1982)
The idea of such a vacuum was problematic in ancient Greek philosophy, however, and in the fourth century BCE Aristotle (384–322 BCE) influentially argued against its possibility, contending that it was plagued by inconsistency and absurdity and was unhelpful in explaining physical phenomena. His arguments were taken as decisive throughout the Middle Ages. In the Renaissance, however, thanks to the rediscovery of such texts as the Roman atomist Lucretius’s De Rerum Natura (c. 50 BCE) and to experiments carried out by early physicists, the idea began to be taken seriously again. In 1672, Otto von Guericke argued for the reality of vacuum and explicitly contended that “beyond or above the region of air, there begins a pure space void of every body.”
As astronomical discoveries progressed, the estimated size of outer space steadily increased. And exploration of outer space began in the 1950s, with humans traversing the void and landing on the moon in 1969. Such scientific and technological advances have brought with them international laws to ensure that human use of outer space remains peaceful. GB
c. 480 BCE
Nothingness
Parmenides
The concept of something that does not exist
A seventeenth-century engraving of the Greek philosopher Parmenides. Parmenides is best known for his metaphysical and cosmological poem On Nature (c. 480 BCE).
Nothingness—the property or state of being nothing—is anything but a simple idea. The ancient Greek philosopher Parmenides (fl. early fifth century BCE) was the first to introduce the idea of nothingness, only to reject it as unthinkable. On that basis, he reached various striking conclusions, particularly monism: the idea that there is only one thing.
“King Lear: Speak.
Cordelia: Nothing, my lord.
King Lear: Nothing!
Cordelia: Nothing.
King Lear: Nothing will come of nothing: speak again.”
William Shakespeare, King Lear (1606)
After Parmenides, the idea of nothingness—and cognate ideas, such as the void—took several turns. In physics, although the ancient atomists held that the world consisted of atoms in a void, Aristotle (384–322 BCE) argued for the impossibility of a void, and his views were accepted until the idea of empty space was rehabilitated in the Scientific Revolution (c. 1550–1700). Whether empty space counts as nothing according to contemporary physics is debatable: even when there is no matter in a region of space, it will still have measurable properties. In Western theology, early Christian theologians developed the idea that God created the world ex nihilo, out of nothing, as opposed to from preexisting materials; in the eighteenth century, philosopher G. W. Leibniz (1646–1716) posed the question, “Why is there something instead of nothing?” and answered by invoking God. Nothingness was associated with feelings of insignificance and meaninglessness and the fear of death by existentialist philosophers in the twentieth century.
Calculating about nothing posed challenges, which were surmounted by the invention, in ninth-century India, of a positional notation including the number zero. Reasoning about nothing also posed challenges, which were at least partly surmounted by the realization that “nothing” is not a noun but a negative indefinite pronoun: to say “nothing exists” is not to ascribe existence to something named “nothing” but to deny existence to anything. GB
c. 450 BCE
The Fountain of Youth
Herodotus
A mythological water source with the power of granting eternal youth
The Fountain of Youth as envisaged in a mural (c. 1400) at Manta Castle near Saluzzo, Italy.
The Fountain of Youth is a mythical spring that is supposed to have the power of prolonging or restoring the youth of those who drink from or bathe in it. Myths of such a fountain are to be found in various cultures, particularly throughout the Middle East. The first recorded mention of it is from the ancient Greek historian Herodotus (c. 484–425 BCE), who recounted a claim that there was such a fountain in Ethiopia.
In the Middle Ages, stories about the Fountain of Youth circulated in the Islamic world and then spread to such European works as The Travels of Sir John Mandeville (c. 1356). In the sixteenth century, the Spanish historian Peter Martyr d’Anghiera, who wrote early accounts of the European exploration of the New World, reported a native story of a miraculous fountain on an island in the Gulf of Honduras, an inlet of the Caribbean Sea. While the explorer Juan Ponce de León was indeed given a charter to discover and settle a legendary island (Beniny or Beimeni), the popular idea that he sought the Fountain of Youth there seems to have been invented by the sixteenth-century historian Gonzalo Fernández de Oviedo, who maliciously added that Ponce de León hoped to cure his impotence. However, the story about his search for the Fountain persists as a historical myth.
“The only bath in the Fountain of Youth would be … to possess other eyes.”
Marcel Proust, Remembrance of Things Past (1913–27)
Few people take the story of the Fountain of Youth seriously today, but it remains a popular theme in literature and the arts (such as Darren Aronofsky’s film The Fountain, 2006). It also remains an almost inevitable metaphor in discussions of the modern concerns of prolonging lifespan and reducing the effects of aging. GB
c. 450 BCE
Fatalism
Ancient Greece
The belief that events are predestined and nothing can alter their course
Fatalism, the belief that some events are destined to occur regardless of whatever else might occur, originated with thinkers in ancient Greece. A well-known example is the story of Oedipus from Sophocles’s ancient play Oedipus Rex (c. 429 BCE). In the play, Oedipus seeks revenge for the murder of his former king and his wife’s former husband, Laius, only to discover that Laius and his wife abandoned Oedipus as a child to escape an oracle that their son would kill his father and sleep with his mother. Yet, despite all their machinations, the prophecy had come true: Oedipus had killed his father and married his mother.
Fatalism is distinct from determinism. The latter is the view that every event is fully determined by prior events, so that if some prior events had been different, later events would have been different. Fatalism neither implies, nor is implied by, determinism.
“Your life must now run the course that’s been set for it.”
Kazuo Ishiguro, Never Let Me Go (2005)
The ancient Greek Stoic philosophy is often linked with fatalism, though it is unclear whether it is fatalistic or deterministic. Some Stoics suggested that the universe is organized according to a single divine purpose that will be realized irrespective of what humans intend. Others argued that perfect virtue is found through learning to be guided by nature and becoming free from passions. This emphasis on learning and becoming free suggests that some events are left to individual agents. Scholars debate whether this constitutes a conceptual problem for Stoicism or fatalism. Fatalism, especially in regard to moral attitudes and happiness, remains influential in contemporary thought, notably in military training. JW
c. 450 BCE
History
Herodotus
A branch of knowledge that records and explains past events
The title page of a Latin edition of Herodotus’s The Histories, printed by Gregorius de Gregoriis in Venice in 1494, nearly 2,000 years after it was originally written.
Before the invention of writing, history remained an entirely oral phenomenon. The first known written documents, and thus those marking the boundary between the prehistorical period and the historical period, come from ancient Sumeria in approximately 3200 BCE. It was not until much later, however, that the ancient Greeks first looked at history as a field of study, something that could be examined, weighed, evaluated, and used as a tool. Herodotus (c. 484–425 BCE) is widely credited as the first person to create a historical work when, sometime during the fifth century BCE, he wrote The Histories.
“History is a relentless master. It has no present, only the past rushing into the future.”
John F. Kennedy, U.S. president 1961–63
In The Histories, Herodotus gave an account of the wars between the Greeks and the Persian Empire from 499 BCE to 449 BCE; in particular he sought to understand the cause of the conflict by examining the wars, and the events leading up to them, from both sides. Herodotus traveled widely in the region and interviewed numerous people in the course of compiling his work, and his style of narration is very much that of a storyteller. The Histories is not entirely impartial and it was dismissed by later Greek thinkers as lies. Nonetheless, it provides an extensive insight into the cultures of the Mediterranean and Middle East during that period and is still used as a leading historical source.
Humankind has likely asked questions about the past since people first became capable of recognizing that the world is not a static reality. By viewing the past as something understandable, humans created a way of expanding their understanding of themselves and the world, and gained a broader view of the concept of time. History allowed people an opportunity to experience, though not firsthand, events that would otherwise be forever unknowable. The understanding that history provides often forms a lens through which people shape their expectations, judgments, emotions, and actions. MT
c. 450 BCE
I Know That I Do Not Know
Socrates
The argument that knowledge is never really acquired
A portrait statue of Socrates from c. 200 BCE–100 CE. According to his pupils Plato and Xenophon, Socrates was portly and pug-nosed with fleshy lips, resembling a satyr.
The well-known statement, “All I know is that I do not know,” is attributed—questionably, according to some scholars—to the ancient Greek philosopher Socrates (c. 470–399 BCE), based on two dialogues written by his disciple Plato (c. 424–c. 348 BCE). In The Republic (c. 360 BCE), Socrates concludes a discussion with Thrasymachus on “justice” by saying, “the result of the discussion, as far as I’m concerned, is that I know nothing, for when I don’t know what justice is, I’ll hardly know whether it is a kind of virtue or not, or whether a person who has it is happy or unhappy.” In The Apology (399 BCE), Socrates says of a well-respected politician that “he knows nothing, and thinks that he knows; I neither know nor think that I know.”
“The result of the discussion, as far as I’m concerned, is that I know nothing …”
Socrates, quoted in Plato’s The Republic (c. 360 BCE)
The resulting slogan was adopted by later thinkers and incorporated into the tradition that became known as “Academic Skepticism.” Rather than believing that it is impossible to know anything, Academic Skeptics actually claim only that we can know very little about reality—namely, truths of logic and mathematics. This contrasts with Pyrrhonian skepticism, which involves an attitude of doubting every positive judgment, including logic and mathematics.
A serious problem with Socrates’s statements is that he seems committed to an incoherent position. If he truly does not know anything, then it is false that he knows that; but if he does know he does not know anything, then it is false that he does not know anything. Thus, the claim “I know that I do not know” is self-defeating (resulting in the statement also being known as the Socratic paradox). In response, many scholars argue that this is an uncharitable reading of Plato. They contend that Socrates’s claims are expressed in a particular context, referring only to specific concepts and not to knowledge generally (“justice” in The Republic, and “beauty” and “goodness” in The Apology). JW
c. 450 BCE
The Socratic Method
Socrates
A teaching method that relies on continually asking questions
A nineteenth-century painting by Gustav Adolph Spangenberg, depicting Socrates and his disciples.
The Socratic Method is a pedagogical style named after its well-known exemplar. Unlike the great sophist orators of his time and the later Aristotelian and Scholastic teachers, who disseminated information through carefully planned lectures, Socrates (c. 470–399 BCE) engaged his audience individually and personally with a series of questions. These questions were designed to elicit a reflective and mostly skeptical perspective on various philosophical, political, and religious ideas. In a well-known case, depicted in Plato’s dialogue Meno (c. 380 BCE), Socrates used his method to “teach” an uneducated slave-boy a set of Euclidean propositions, including the Pythagorean theorem. The central assumption underlying Socrates’s approach is that knowledge is innate—we do not acquire new information; instead, education reminds us of what we already know.
Socrates’s Method was overshadowed in the Middle Ages by the popularity of classical authorities such as Aristotle and Cicero, leading to the rise of the lecture-centered pedagogy known as the Scholastic Model (also called the “banking model,” because it assumes knowledge can be “deposited” in a student like money in a bank). A further setback came in the seventeenth century with the rise in prominence of empiricism, the view that we come to the world as “blank slates” and must obtain knowledge through experience. Empiricism implies that innatism is mistaken, and thus challenges the pedagogy based on it.
The question of the effectiveness of the Socratic Method still receives attention from education researchers. Some contend that it constrains learning and fosters aggression. Others respond that, as with all teaching styles, the Socratic Method can be abused, but when used well it can be effective. It is still frequently applied in law schools, as memorably portrayed in the movie The Paper Chase (1973). JW
c. 450 BCE
The Four Elements
Empedocles
The theory that the universe is built out of four natural elements
A fifteenth-century illustration of Christ surrounded by the four elements.
In his poem On Nature (c. 450 BCE), the Greek poet Empedocles (c. 490–430 BCE) called upon a set of gods to represent the elements of his own cosmology. The notion that everything in existence is composed of earth, air, fire, and water, or a combination of these four elements, was borrowed from the ancient Babylonian creation myth, Enuma Elish (c. 1800 BCE), in which the universe emerges from conflicts between gods, each of whom represents some element or force of nature.
Empedocles was seeking what is now often referred to as a “unified field theory,” a theory capable of providing the groundwork for the explanation of any given natural phenomenon. The strategy he inherited from his intellectual predecessors, such as Thales and Anaximenes (who were themselves influenced by the Babylonian myth), was to attempt to identify the most basic ingredient, or ingredients, of the universe. In the early sixth century BCE, Thales had believed that ingredient to be water. Later, Anaximenes argued that water was too fundamentally different from certain natural phenomena (like fire) for it to be the basic ingredient of the universe. Instead, he proposed that air was the basic ingredient. Empedocles, however, saw no way to explain the vast array of natural phenomena without introducing a total of four basic ingredients: earth, air, fire, and water. These elements were what Empedocles referred to as “the four roots.”
Aristotle (384–322 BCE) added a fifth element, aether. Medieval scholars learned of Empedocles’s notion of the four elements via Aristotle, and Empedocles’s cosmological theory dominated science until the seventeenth century. Although forms of atomism emerged as early as the fifth century BCE, it was not until the work of Sir Isaac Newton (1642–1727) and Robert Boyle (1627–91) took hold that the four elements were replaced by the atom (or something very close to it) as the foundation of the universe. DM
c. 450 BCE
Classicism
Ancient Greece
A work of art based on principles of symmetry, proportion, and order
Vitruvian Man (c. 1490) by Leonardo da Vinci. The drawing is accompanied by notes based on the work of the Roman architect Vitruvius, who drew heavily on the Classical style.
It is common to hear of something in the arts referred to as a “Classical antique” or in “Classical style.” The term “Classical” refers to an artwork created in the style of fifth-century BCE Greece, one that evidences the principles of symmetry, proportion, and order.
“Classicism is not the manner of any fixed age or of any fixed country: it is a constant state of the artistic mind. It is a temper of security and satisfaction and patience.”
James Joyce, Ulysses (1922)
For example, after writing his Canon of Proportion in c. 450 BCE, the Greek sculptor Polykleitos (fl. fifth and early fourth century BCE) made a bronze sculpture, the Doryphorus, in which he used the human head as the module for the rest of the body’s measurements, following a ratio of 1:7 for the height. At much the same time, the architects of the Athenian Parthenon (447–438 BCE) used the mathematical golden ratio, or golden section, to create a system of modules, proportion, and optical corrections that enabled them to implement notions of harmony and perfection into its design. In 350 BCE the ancient Greek philosopher Aristotle (384–322 BCE), in his treatise the Poetics, described the structure and form of an ideal tragedy, using Sophocles’s play Oedipus Rex (c. 429 BCE) as his example. And in his Ten Books on Architecture (c. 15 BCE), the Roman writer Vitruvius (d. after c. 15 BCE) returned to these Classical Greek precedents to develop his influential taxonomy and pattern book for the art of building.
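For readers unfamiliar with it, the golden ratio mentioned above has a simple mathematical statement (a standard definition, given here purely as an illustration): a length is divided in the golden ratio when the whole relates to its larger part as the larger part relates to the smaller, which yields

\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618, \qquad \text{satisfying } \varphi^2 = \varphi + 1.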
When the writers, artists, and architects of the Italian Renaissance (fourteenth to sixteenth century) looked around them, believing that “man is the measure of all things,” they were inspired by the monuments of Greece and Rome to abandon the soaring Gothic technologies of the Middle Ages and re-embrace the Classical style. Leonardo da Vinci’s drawing Vitruvian Man (c. 1490) was derived from a passage in Vitruvius’s book on geometry and human proportions; Brunelleschi’s dome (1439–61) for Florence Cathedral took the Roman Pantheon for inspiration; Michelangelo’s David (1501–04) harked back to Classical ideals. The ancient Classical style still sets a standard for artists and architects today. PBr
c. 450 BCE
Man Is the Measure of All Things
Protagoras
A way to argue from disagreement against objectivity
A portrait of the ancient Greek philosopher Protagoras, painted by Jusepe de Ribera in 1637. Ribera’s artistic style was heavily influenced by the work of Caravaggio.
“Man is the measure of all things” is a slogan that plays an important role in a particular style of argument against objective knowledge. The statement is attributed to the ancient Greek philosopher Protagoras (c. 490–420 BCE), a predecessor of Plato and the most prominent of the Sophists (a group of ancient Greek intellectuals); only fragments of his writings survive.
“Since then nothing is grasped apart from circumstances, each person must be trusted regarding those received in his own circumstances.”
Sextus Empiricus, Adversus mathematicos VII:62–63 (second or third century CE)
In Plato’s dialogue Theaetetus (c. 360 BCE), Protagoras is represented as maintaining that the truth of claims of perception is relative to the perceiver. Suppose that two people disagree about whether the wind is hot. It feels hot to person A and cold to person B. This is where the “man is the measure of all things” phrase applies: there is no way for the two to transcend their perceptions and check their judgments against reality. The best that can be said is that the wind is hot to person A and cold to person B; there is no saying that the wind is hot or cold without referring to who is perceiving it. The disagreement, therefore, is only apparent.
The resulting doctrine of relativism—the theory that knowledge is subjective according to differences in perception and consideration—also extended to claims about moral and aesthetic qualities, such as justice and beauty. While there is no evidence that Protagoras himself thought relativism was true across the board, Socrates, and later Aristotle, reasoned that his arguments committed him to thinking so and thus were self-refuting.
Similar arguments, from disagreement via the “man is the measure of all things” slogan to various forms of relativism, subjectivism, and skepticism, applied to a dizzying variety of topics, recur throughout the history of philosophy and have been the subject of spirited debate. Outside philosophy, the slogan is also used in connection with the Renaissance’s reorientation from the divine to the human. GB
c. 450 BCE
Sophism
Ancient Greece
A school of philosophy associated with moral skepticism and specious reasoning
The meaning of the term “sophism” has changed greatly throughout history. In ancient Greece “sophist” was first used to refer to wise men, such as the writers Homer and Hesiod. Around the fifth century BCE, sophists came to be depicted as untrustworthy rhetoricians, typically politicians and orators, who applied philosophical reasoning solely to practical matters. Well-known sophists of this era include Protagoras, Gorgias, and Thrasymachus. These sophists rejected the abstract theorizing of the pre-Socratics, and embraced skepticism about traditional philosophical topics. They argued that the human mind is suited only for practical skills, such as politics, public entertainment, and commerce. This practical emphasis made them effective in legal and political proceedings, and they accepted fees to teach their methods.
“Some men weave their sophistry till their own reason is entangled.”
Samuel Johnson, writer
Aristophanes ridiculed sophists, regarding them as shysters and braggarts, but his rendering is unreliable because he regarded Socrates as a sophist. Socrates, while he acknowledged the sophists’ practical skill, criticized them for invoking groundless distinctions and constructing faulty arguments.
In the Middle Ages, the term “sophism” acquired a more neutral sense as “sophisma,” referring to a puzzling claim needing clarification or disambiguation. Although the term largely retained its negative connotation, writings called sophismata flourished at this time, analyzing conceptually problematic claims. Well-known authors of sophismata include Thomas Bradwardine and Richard Kilvington. Despite this brief respite, today sophism retains its predominantly negative connotation, referring to faulty, deceitful reasoning. JW
c. 450 BCE
Existential Nihilism
Empedocles
There is no meaning or purpose to life and all existence
Existential nihilism posits that life has no intrinsic value or meaning. Life is simply one phenomenon of many in a universe governed by arbitrary and impartial laws. No one phenomenon is more meaningful or valuable than any other. Thus existential nihilism is an explicit rejection of the sanctity and superiority of life and any value system that is derived therefrom.
The earliest expression of existential nihilism is in the poetry of Empedocles (c. 490–430 BCE). However, its most thorough articulations are in works by Friedrich Nietzsche (1844–1900), Søren Kierkegaard (1813–55), and Jean-Paul Sartre (1905–80). Nietzsche was highly critical of the paradigms and institutions from which moral values are drawn. He berated philosophers for idolizing “Reason,” denounced religion, and declared God dead. These attacks on conventional foundations of morality reinforce his underlying argument that constant reassessment of the basis of our values is necessary to avoid subverting humanity’s creativity.
“The life of mortals is so mean a thing as to be virtually un-life.”
Empedocles
Sartre argued that because life has no intrinsic meaning, each individual is free to assert and affirm their own meaning through their actions. He declared “existence precedes essence,” meaning that a life has no meaning until a person acts in a way that bestows meaning upon it. Furthermore, it is immoral to live life in ignorance of the existential ramifications of one’s actions, and especially immoral to do so willingly.
Existential nihilism has left a lasting impression on Western culture. It reinforces the individualism and moral relativism that have become prevalent, especially in North American culture. DM
c. 450 BCE
Moral Relativism
Ancient Greece
Objective truth does not exist; all truths depend on opinion
Moral relativism originated with the ancient Greeks in the fifth century BCE. It maintains that there is no objective moral truth and thus that all moral truths depend on the opinion of individuals. Broadly speaking, there are three versions of moral relativism: subjectivism, conventionalism, and divine command theory.
Subjectivism says morality is relative to the opinion of particular human persons. It is often confused with aesthetic preferences (“You like blondes; I like brunettes”) and is thought—popularly but wrongly—to be the only way to negotiate tough ethical dilemmas.
Conventionalism says morality is relative to the opinion of a certain group. In its pragmatic form, it says the right thing to do is what keeps society stable. In its utilitarian form, it says the right thing to do is what maximizes happiness for the most people. It is a type of moral relativism as its principles are socially constructed, not objectively grounded.
“It is … the abdication of reason that is the source of moral relativism.”
Peter Kreeft, professor of philosophy
Divine command theory says morality is relative to the opinion of a particular god or group of gods. What distinguishes divine command theory from objective moral theories involving God (such as Natural Law theory) is that in divine command theory the god is not identical in nature to objective moral goodness.
Moral relativism usually results from a slide from the undisputed fact of cultural relativism (different cultures often express different values) to the non sequitur that moral truths themselves are therefore relative. Whatever else we might think of moral relativism, its ancient roots and modern followers (especially in the West) make it an important idea. AB
c. 450 BCE
Zeno’s Arrow
Zeno of Elea
A Greek philosopher demonstrates that movement is an illusion
A pupil of the ancient Greek philosopher Parmenides, Zeno of Elea (c. 495–430 BCE) lived in a Greek colony in Italy. Although his ideas are known to us only through comments in the works of later Greek thinkers, Zeno is well-known for his teasing paradoxes.
The paradox of the arrow is a thought experiment that challenges commonsense notions of time and movement. When an arrow is shot from a bow, it appears obvious that the flying arrow moves. But Zeno denies it is ever in motion. He invites us to look at the arrow at any moment during its flight—as it were, freezing the frame. We will see that, in that instant, the arrow is at rest, statically occupying its place in space, no more no less. But the flight of the arrow is simply a succession of instants. Since in every single instant the arrow is immobile, there is never an instant when it is in motion. Our impression of the arrow moving is therefore an illusion, according to Zeno.
“Zeno’s arguments … cause much disquietude to those who try to solve them.”
Aristotle, Physics (c. 330 BCE)
The implications of Zeno’s vision are radical. If time is a series of static, unconnected moments, then in reality the world is unchanging and eternal, without past or future. A sensible person is, of course, loath to accept such notions. One solution to Zeno’s arrow paradox seemed obvious to Aristotle. He argued that there were no “instants” of time. As he put it, “Time is not composed of indivisible moments.” In other words, time flows continuously, like a stream, from the past into the future, freeing the arrow to fly to its target. But British philosopher Bertrand Russell, in the early twentieth century, accepted Zeno’s arrow paradox as “a very plain statement of a very elementary fact.” RG
c. 450 BCE
The Art of War
Sunzi
Knowledge, discipline, and deception enable you to win any battle
A porcelain dish from China’s Kangxi period (1654–1722), decorated with a painting of a battle scene.
The Sunzi Bing Fa (The Art of War) was probably written during the transition between China’s Spring and Autumn Period (722–481 BCE) and its Warring States Period (403–221 BCE). It is a philosophical treatise on how to manage conflict and win battles, believed to be by Chinese military general Sunzi (fl. fifth century BCE).
Since war necessitates the loss of lives and resources, every war is a defeat. The greatest victory is to defeat the enemy without ever having to meet them on the battlefield. War should be a last resort, and all peaceable remedies must be exhausted before resorting to violence. However, once a commitment has been made to violence, the goal is to achieve victory as quickly as possible at the minimum cost. A good commander does not resist change, but rather flows with the dynamic nature of the situation and turns it to his advantage. This is best achieved through knowledge of the terrain, which allows a commander to place his troops strategically in a position from which he is able to use his forces efficiently against his opponent’s weak spots. A commander must know his enemy by gaining as much intelligence as possible before committing to battle. Warfare is ultimately the art of deception: discipline and knowledge will allow a commander to trick his opponents into exposing their weaknesses.
“The supreme art of war is to subdue the enemy without fighting.”
Sunzi, The Art of War
The Art of War is one of the most influential works on warfare ever written. It is required reading for officer candidates in many East Asian countries and is recommended to U.S. military personnel by the Army and Marine Corps. Business executives, lawyers, and coaches frequently use it as a strategy guide. JM
c. 450 BCE
Casuistry
Greek Sophists
A form of reasoning that builds upon earlier precedents
Casuistry, or case-based reasoning, probably originated among the Sophists, ancient Greek teachers of rhetoric and persuasion active in the mid-fifth century BCE. It allows a person to see new, untried arguments as analogous to older, successful ones that have set a precedent for ethical decision making. It forms the basis of English common law, which shapes the jurisprudence in many Western nations today. The idea that the law develops incrementally through social conventions is fundamental to this system of moral reasoning.
Religious groups, such as the Jesuits, have also used casuistry as the basis for their examination of individual moral questions, searching for the best precedent in the scriptures dictating moral and social codes. This type of paradigm setting is also used in the medical and bioethical fields, in which doctors build their system of practice on the success or failure of previous therapies and pharmacology.
“Cages for gnats, and chains to yoke a flea, Dried butterflies, and tomes of casuistry.”
Alexander Pope, The Rape of the Lock (1712)
In journalism, the term “casuistry” is usually used pejoratively—it is seen as either an overgeneralization or an overly subtle interpretation of general principles to defend a specific case. The special circumstances of a case may render the analogy to a previous one specious and even damaging.
Biased or poorly made original cases, or a lack of judgment on the part of the court of public opinion, remain a constant threat to the validity of casuistic moral reasoning. At its worst, casuistry is considered to be the type of moral relativism and situational ethics that allows immoral acts to be used as precedent for further bad actions. PBr
c. 450 BCE
Divine Command
Socrates
The idea that our moral obligations come externally from God rather than ourselves
A painting (1911) of the twenty-four elders of the Bible bowing before God, by Henry John Stock.
The theory of Divine Command, which predates Christianity, is the idea that all our ethical and moral questions, actions, and concepts are fundamentally dependent upon God, who is assumed to be the originator of all goodness and morality. Thus morality is wholly based upon God’s commands, and the degree to which we behave morally is inextricably linked to how faithfully we have followed God’s commands in response to any given situation.
The idea of Divine Command has been criticized and debated by philosophers and intellectuals since Socrates (c. 470–399 BCE) asked his famous question: “Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?” In other words, does God command an action because it is right, or is it right because He commands it? Philosophers such as Immanuel Kant (1724–1804) say it is in our own best interests to believe in God, and therefore in the morality that comes from faith, because to try and live with the weight of morality’s complexities would be too much for anyone to bear alone. But the theory does contain an obvious dilemma: if God commands us to hurt someone or inflict suffering, then to do so would be the morally correct thing to do. Morality suddenly becomes an arbitrary thing, just as it does in the secular world when it is reduced to obeying unquestioningly the dictates of an authority, and proponents of Divine Command theory thus find themselves facing the proposition that cruelty might be morally permissible, indeed necessary, if it is pleasing to God.
Perhaps it is the argument of the Dominican priest and theologian Thomas Aquinas (1225–74) that offers the best hope of a workable basis for morality. God created us, he wrote, in possession of a likeness of His own inner nature, and by listening to our inner natures we are able to seek that Narrow Path which helps subdue the immoral “devil on our shoulder.” JS
c. 425 BCE
Satire
Aristophanes
The use of entertaining ridicule to achieve social or political ends
An illustration from an ancient Egyptian papyrus (c. 1100 BCE) parodying scenes of human activity.
Using humor to expose the failings of others likely dates back to the origin of humor itself. Ancient Egyptian documents contain examples of writing and art that ridicule others in society. But as a literary form, satire refers to the deployment of parody, sarcasm, or irony for the purpose of exposing failings and flaws of individuals or society. Satire often has the social or political purpose of changing its target’s behavior.
The Greek playwright Aristophanes (c. 450–c. 388 BCE) is one of the best-known satirists, and was credited by his contemporaries—friends and enemies—with the development of comedic satire into an art form. His first surviving play, The Acharnians (c. 425 BCE), is cited as the earliest example of satirical comedy.
Beginning with Aristophanes and moving to the early satirists of the Roman Empire, satire emerged as a powerful political and social tool. Usually produced for mass consumption, satire presented political arguments in the form of entertainment, and was just as effective at influencing public opinion as any other kind of political argument. Two early Roman satirists, Horace (65–8 BCE) and Juvenal (c. 55–c. 127 CE), lend their names to a basic categorization of two types of satire. Horatian satire refers to a mild and humorous form that typically aims to expose absurdity or ridiculousness. Juvenalian satire is a more aggressive form that has less emphasis on humor and more on communicating outrage over perceived injustice or immorality.
Satire is one of the most common forms of political communication in modern politics and popular culture. Ridiculing political opponents is often an effective means of making a political or social point, and can create a lasting impression on an audience. Humorous satire offers entertainment and has become a cultural industry in print, television, and other media; examples include sketch comedies on television, and the increasingly popular “fake news” programs. TD
c. 400 BCE
Cynicism
Antisthenes
A school of philosophy that rejected personal ambition and possessions in favor of leading a simple existence
Diogenes (c. 1860), by the French artist Jean-Léon Gérôme. The dog with him is an emblem of his cynic philosophy, which emphasized an austere existence.
When we say that someone is cynical we are most often accusing that person of habitual suspicion of the claims and motivations of their fellow citizens. This colloquial meaning of the term “cynicism” most likely originates in the image of the ancient Greek philosopher Diogenes (d. c. 320 BCE) who, according to legend, lived as a vagabond and roamed the streets of Athens carrying a blazing lantern, even in daylight, hoping that it would help in his search for “one good man.” There is more to cynicism, however, than derogatory opinions concerning human nature.
“Cynic, n. A blackguard [who] sees things as they are, not as they ought to be.”
Ambrose Bierce, The Devil’s Dictionary (1906)
Cynicism is an ethical philosophy originating in the teachings of Antisthenes (c. 445–c. 365 BCE), one of the lesser-known students of Socrates. Cynicism teaches that the only true path to happiness is to abandon the trappings of social convention—such as the pursuit of wealth, conformity with etiquette, the desire for fame and power—and to live in agreement with nature. If, says the cynic, one abstains from making judgments based upon popular values, which the cynic holds to be false, one can achieve a state of arete, Greek for “virtue” or “excellence.” Cynicism is therefore not a school of thought in the traditional sense; rather, it can be viewed as a way of life.
The ancient cynics—in particular, Diogenes—are also credited with developing the concept of “cosmopolitanism.” Legend relates that, when asked about his political allegiances, Diogenes claimed to be a kosmopolites, or “a citizen of the world.” This notion of world citizenship would later be taken up by Immanuel Kant (1724–1804) and expanded by Martha Nussbaum (b. 1947). Nussbaum argues that legitimate norms of justice govern the relationships between all persons, and not merely the relationships that exist between those who live within arbitrary political regions. This interpretation derives from ancient cynicism. DM
c. 400 BCE
Humorism
Polybus
A system of medical diagnosis and treatment based on four bodily fluids
Humorism, or the theory of humors, revolves around the concept that physical health and mental temperament depend on the balance of four bodily fluids, or humors: black bile, yellow bile, phlegm, and blood. Polybus (fl. c. 400 BCE), the son-in-law of Hippocrates, the father of medicine, is sometimes credited with formulating the theory.
According to humorism, each of the four fluids has certain characteristics: blood and yellow bile are hot, yellow bile and black bile are dry, black bile and phlegm are cold, and phlegm and blood are wet. These correspondences were of use in both diagnosis and treatment to restore the balance of humors. For example, a dry fever might be attributed to a surplus of yellow bile and might be treated by inducing vomiting, decreasing the amount of yellow bile, or with cold baths, increasing the amount of phlegm (the cold, wet phlegm balancing out the hot, dry yellow bile). Further correspondences with various natural systems—the seasons, for example—furnished additional resources for the theory. The theory of humors also provided a theory of temperament, which is still familiar in the terms sanguine, choleric, melancholic, and phlegmatic.
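The balancing logic described above is mechanical enough to be captured in a few lines of modern code. The sketch below is purely illustrative: the humors and their qualities are taken from the paragraph above, while the function name and structure are inventions for this example.

# A minimal, purely illustrative sketch of humoral balancing; names are modern inventions.
# The four humors and their paired qualities, as given in the text.
QUALITIES = {
    "blood": {"hot", "wet"},
    "yellow bile": {"hot", "dry"},
    "black bile": {"cold", "dry"},
    "phlegm": {"cold", "wet"},
}

# Each quality is countered by its opposite.
OPPOSITE = {"hot": "cold", "cold": "hot", "wet": "dry", "dry": "wet"}

def balancing_humor(excess: str) -> str:
    """Return the humor whose qualities oppose those of the humor in excess."""
    wanted = {OPPOSITE[q] for q in QUALITIES[excess]}
    return next(name for name, qualities in QUALITIES.items() if qualities == wanted)

# The example from the text: a surplus of hot, dry yellow bile is countered
# by increasing cold, wet phlegm (for instance, with cold baths).
assert balancing_humor("yellow bile") == "phlegm"
assert balancing_humor("blood") == "black bile"

Run as written, the example confirms the pairings implied in the text: hot, dry yellow bile is balanced by cold, wet phlegm, just as hot, wet blood is balanced by cold, dry black bile.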
Humorism was a dominant belief among physicians in the West for around 2,000 years, with proponents including the Roman physician Galen and the eleventh-century Islamic physician Avicenna. Unsurprisingly, the theory is reflected in art and literature as well, including in Michelangelo’s Sistine Chapel and the plays of William Shakespeare and Ben Jonson. With the rise of modern medicine in the seventeenth century, the physiological claims of humorism were gradually rejected. The fourfold categorization of temperament continued to be influential, however, with thinkers from Immanuel Kant to Hans Eysenck discerning a measure of truth in it. GB
c. 400 BCE
Atoms
Democritus
The fundamental pieces of the universe and the basic substance of all matter
Atoms make up everything there is. They are the tiniest possible pieces of matter and are found in everything. From this one elemental piece of primary matter, all gases, solids, liquids, and even light come into existence. This understanding of the world was first expressed by ancient Greek philosopher Democritus (c. 460–c. 370 BCE). According to Democritus, the qualities of various types of atoms—differentiated by shape, weight, or size—explained the different properties that could be observed in different states of matter.
Democritus based his notions of atomic theory almost purely on conjecture and speculation, as he had no way of observing atomic phenomena. Similar ideas about atoms also arose in India around the same time, though it was Democritus who came up with the term “atoms.” However, it was not until the late eighteenth and early nineteenth century that scientific advances allowed researchers, such as John Dalton (1766–1844), to test Democritus’s atomic theory. The work of subsequent scientists led to the contemporary understanding of atoms, their component parts, and subatomic particles.
“Nothing exists except atoms and empty space; everything else is an opinion.”
Democritus
Even though atomic theory began as little more than philosophical musings, it eventually became a foundational concept of modern science. Not only did atomic theory allow for quantitative, empirical measurements of matter, it allowed researchers to make and test predictions based on their understanding of how matter was composed at the fundamental level, enabling the development of both theoretical and practical scientific applications. MT
c. 400 BCE
The End Justifies the Means
Ancient Greece
Achieving a desired outcome can justify prior immoral actions
A Mesopotamian victory stele from c. 2230 BCE, showing Naram-Sin, the king of Akkad, trampling his enemies.
Many philosophers argue that to do what is right it is necessary to have a morally good reason for acting. Typically, identifying morally good reasons involves appealing to abstract principles (for example, treat everyone as an end and not simply as a means; do unto others as you would have them do unto you). To determine a sufficient principle, some philosophers appeal to pure reason (universal moral truths), while others appeal to consequences of a certain type. One traditional appeal to consequences is based on preference: if outcome Y is preferred, then act X (that produces Y) is permissible; in other words, “the end justifies the means.” If the outcome of an act is desirable, the act is justified regardless of what it involves (even if it would seem immoral under other circumstances).
“[A] prince wishing to keep his state is very often forced to do evil …”
Niccolo Machiavelli, The Prince (1532)
The origin of the phrase “the end justifies the means” can be traced to the ancient literati, for example, the Greek playwright Sophocles (c. 496–406 BCE) and Roman poet Ovid (43 BCE–17 CE). Another well-known proponent is Italian statesman Niccolo Machiavelli (1469–1527). His work The Prince (1532) is a masterpiece of practical decision making, predicated on one goal: to maintain political position as long as possible. This end may require instances of lying, coercion, and killing, but, if the principle is sound, these acts are justified.
Notably, there is little reason to think desired outcomes have anything to do with moral goodness. For example, genocide and rape may be desired by some. Thus, as a moral principle it is widely considered implausible, and it is often invoked as a criticism of views that appear to imply it. JW
c. 400 BCE
Physician-assisted Suicide
Ancient Greece
The morality of a doctor aiding the death of a patient
Physician-assisted suicide occurs when a person is provided with the means by which to bring about their own death by a physician. An early expression of this idea can be found in the Hippocratic Oath of Greek antiquity composed around 400 BCE. New medical practitioners were asked to take the oath on completion of their medical training; contained within it was the following pledge: “I will neither give a deadly drug to anybody if asked for it, nor will I make a suggestion to this effect.” It is here then that we first encounter the idea of physician-assisted suicide as an issue lying at the heart of medical ethics.
“Do any of you here think it’s a crime to help a suffering human end his agony?”
Jack Kevorkian, physician
Physician-assisted suicide differs from euthanasia because the latter involves ending the life of another as opposed to supplying the means by which a person can end their own life. However, contemporary debates surrounding both practices often focus on the same issues concerning the value of life and the right to death. During the 1990s the issue of physician-assisted suicide entered the media spotlight when U.S. physician Jack Kevorkian claimed to have assisted more than 130 patients to die. Kevorkian was tried four times before being convicted and sentenced to a maximum of twenty-five years in prison. However, the campaign that Kevorkian fronted for the legalization of physician-assisted suicide is still ongoing and the practice is legal in three U.S. states and several countries in Northern Europe. Despite its ancient origins, it is clear that the issue of physician-assisted suicide continues to be both controversial and relevant in the modern world. LWa
c. 400 BCE
Perspective
Ancient Greece
A technique of depicting three-dimensional reality in two-dimensional space, providing the viewer of the image with a sense of depth
An oil painting of St. Jerome in his study by the Italian Renaissance artist Antonello da Messina, c. 1475, featuring a highly skilled use of perspective.
In art, perspective is a technique that allows an artist to create a realistic representation of a three-dimensional scene or object on a two-dimensional surface. Some of the earliest evidence of the use of perspective appears in the art of ancient Greece in approximately 400 BCE. There, the use of perspective was inspired by the production of theater sets, which incorporated panels set at staggered intervals in order to create a sense of depth on stage. The Greek philosophers Anaxagoras (c. 500–c. 428 BCE) and Democritus (c. 460–c. 370 BCE), along with the mathematician Euclid (fl. c. 300 BCE), drew from these a set of geometrical principles that governed perspective in two-dimensional space.
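The core geometrical principle involved, that an object seen under a larger visual angle appears larger and so apparent size diminishes with distance, can be put in modern notation (an illustration in today’s terms, not a formula quoted from the ancient sources):

\theta \approx 2\arctan\left(\frac{h}{2d}\right) \approx \frac{h}{d} \quad \text{for small angles},

where h is an object’s height, d its distance from the eye, and \theta the visual angle it subtends. Doubling the distance roughly halves the apparent size, which is why a convincing stage set or painting shrinks its more distant elements.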
“No power on earth can keep the picture from being three-dimensional once the first line is drawn or color-plane applied.”
Sheldon Cheney, author and art critic
By the fourteenth century CE, the basic techniques associated with perspective, such as representing distant figures as smaller than near ones, were well known. However, the art of perspective developed dramatically during the Renaissance period, when lifelike representations of people, objects, and places became the norm. The Italian artist Leon Battista Alberti (1404–72) was one of the first to offer systematic rules for the imitation of nature and the use of linear perspective, while Leonardo da Vinci (1452–1519) employed atmospheric perspective, in which light and shade help to create startlingly realistic effects.
In the modern world, perspective techniques that originated in the Renaissance find new applications in a variety of digital media. Artists in the fields of computer-generated imagery and computer graphics are able to achieve ever more realistic and convincing representations of the world. The principles of perspective may be applied in an unprecedented blurring of the boundaries between virtual and real or non-virtual worlds. In this sense, the idea of perspective can be seen as playing a fundamental role in our modern-day experiences of reality. LWa
c. 400 BCE
Materialism
Democritus
The idea that nothing exists independently of the material or physical world
Many ancient thinkers appeal to supernatural or extra-natural entities in order to account for certain features of the natural world. Materialists, however, deny the existence of any nonnatural events, entities, or forces. Early materialists include the Greek atomists, Democritus (c. 460–c. 370 BCE) and Leucippus (fl. early fifth century BCE), who argued that the world consists of nothing but atoms in empty space (even the soul was thought to be composed of atoms), and Epicurus (341–270 BCE), who postulated that the atoms move only in an up–down direction.
The significance of materialism is typically found in discussions of philosophical questions, such as how to account for the properties of objects and how to explain consciousness. For example, while Plato (c. 424–c. 348 BCE) sought to explain why, say, two blue objects look exactly the same by arguing that they participate in preexisting (ante rem) universals, Aristotle (384–322 BCE) argued that all universals are present in existing objects (in re), and was thus a materialist about properties. However, both men seem to appeal to an immaterial divine being to explain the origin of physical reality, and to an immaterial soul to explain consciousness. Thus, it was deemed possible to be a materialist about some things and not others.
The comprehensive materialism of the sort defended by the atomists gained popularity in the late nineteenth and early twentieth centuries as advancements in science reduced the apparent need for extra-natural explanations, and pluralism in mathematics challenged the idea of a unique, Platonic reality of mathematical forms. More recently, advancements in our understanding of the brain have undermined older appeals to immaterial substances or properties to explain consciousness, but they have also served to highlight the limitations of materialism. JW
c. 400 BCE
Determinism
Leucippus
The view that all events occur as a result of prior events and the laws governing reality
Although determinism does not entail naturalism (the view that there are no extra- or supernatural causes), it is usually defined in terms of natural laws and events. Determinism should be distinguished from fatalism, which is the view that some future events will occur regardless of what happens between the present and that future time. Determinism is fatalistic, in the sense that the current state of events and the laws of nature entail that certain events will occur rather than others. But it is not identical to fatalism, which holds that these events will occur regardless of other occurrences.
The earliest version of determinism is probably best associated with the views of the atomists, Leucippus (early fifth century BCE) and Democritus (c. 460–c. 370 BCE), although the later atomist Epicurus (341–270 BCE) allowed that, in rare cases, atoms may “swerve” unaccountably. Determinism was popular with the Roman Stoics and found support in the physics of Isaac Newton.
“A man can do what he wants, but not want what he wants.”
Arthur Schopenhauer, philosopher
Determinism is significant in the history of thought primarily in its relationship to the “free will problem,” that is, the question of what sort of freedom is required for morally responsible agency. If responsibility demands that agents be free to choose among a variety of options at the moment of decision making, then determinism is incompatible with moral responsibility. And even the sort of indeterministic luck highlighted by Epicurus’s swerving atoms may be incompatible with moral responsibility. However, if responsibility is a matter of internal dispositions toward actions, or “reactive attitudes,” determinism may be compatible with moral responsibility. JW
c. 400 BCE
Hippocratic Oath
Hippocrates
A seminal statement of the responsibilities of the medical profession
The Hippocratic Oath describes the duties of a physician. It is traditionally attributed to Hippocrates (c. 460–c. 375 BCE), who is credited with initiating the idea of medicine as a profession. In its original form, the oath is contained in the Hippocratic Corpus, a body of writing attributed to Hippocrates himself and his students.
The original Hippocratic Oath was sworn to Apollo, his son Asclepius, and Asclepius’s daughters Hygieia and Panacea—the Greek mythological figures associated with healing, medicine, health, and cures. It begins with the physician’s duties to fellow physicians, only later turning to their duties to patients. Centrally, the oath requires physicians to treat the sick to the best of their abilities and with their best judgment. Further clauses have been interpreted as prohibiting physicians from engaging in abortion, euthanasia, and surgery, although it is sometimes argued that the oath only requires physicians to avoid use of a particular means of abortion, to refrain from poisoning their patients, and to leave surgery to specialists.
“I will prevent disease whenever I can, for prevention is preferable to cure.”
The Hippocratic Oath (Louis Lasagna version, 1964)
Updated and modernized forms of the Hippocratic Oath are in circulation; the Declaration of Geneva (1948) and Louis Lasagna’s modern version of the oath (1964) are the best known. There are also various descriptions of the duties of the physician from different ethical and religious traditions, such as the Oath of Maimonides (c. 1793). While some physicians regard statements such as the oath as valuable ethical guides, others consider them to be mere formulaic relics. In any event, physicians continue to wrangle about how to understand and codify their professional duties. GB
c. 387 BCE
The University
Plato
An educational institution where the most learned and experienced teach others
The Anatomical Theater in the Archiginnasio of Bologna, once the main building of the University of Bologna.
The world’s first university—a school dedicated to the pursuit of inquiry, thought, and discovery—began in a sacred olive grove outside the city of Athens. Dedicated to Athena, the goddess of wisdom, the academy of Greek philosopher Plato (c. 424–c. 348 BCE) was founded sometime around 387 BCE. The academy was a place of learning where Plato, and later others, would teach students about subjects such as philosophy and politics.
Numerous different academies were subsequently established in the ancient Greek, Roman, and Byzantine worlds. The first modern university, however, is widely recognized to be the University of Bologna. It was founded in 1088, and Holy Roman Emperor Frederick I Barbarossa later granted the school a charter, in 1158, at the same time giving university academics the right to free travel. At first, religious teachings dominated university education, but by the eighteenth and nineteenth centuries the role of churchmen in society had diminished, and much of the curriculum in many universities came to be focused instead on the study of science and the humanities.
“There are few earthly things more beautiful than a university.”
John Masefield, poet
The idea of the university is to create a place where students and teachers can freely acquire information, pursue knowledge, and make new discoveries. While everyone learns in their day-to-day lives, creating a place where higher learning is the sole pursuit has allowed for any number of advancements. There is no field of human endeavor that has not been shaped by discoveries made by students or teachers pursuing knowledge in the university. MT
c. 360 BCE
Theory
Plato
A summarized process of contemplation and understanding that can be supported by evidence
A marble bust of Plato. Plato’s work contains discussions on aesthetics, political philosophy, theology, cosmology, epistemology, and the philosophy of language.
The modern concept of theory derives from theoria, an ancient Greek notion of contemplation and metaphorical vision associated with wisdom. Theoria is a term used to describe a broad range of intellectual activity. Plato (c. 424–c. 348 BCE) associated it with every stage of the intellectual process of understanding, a process that can be summarized afterward. Philip of Opus (fl. c. fourth century BCE), a follower of Plato, restricted theoria to the activity of looking at the stars, with the aim of acquiring the divine perspective and thereby achieving tranquility. Aristotle (384–322 BCE), another of Plato’s pupils, limited it to the act of contemplation.
“The ultimate value of life depends upon … the power of contemplation.”
Aristotle, philosopher and polymath
The notion of obtaining a vision of reality underwrites the contemporary concept of theory as an account of some feature of reality that results from careful investigation. For example, Albert Einstein’s special theory of relativity was formulated after careful consideration of Galilean relativity and the limitations of Newtonian physics. Similarly, Charles Darwin’s theory of natural selection was formulated after careful consideration of the development of organisms in a variety of environmental contexts.
Like all scientific claims, theories are subject to new discoveries and may be overturned by more adequate accounts. For example, although popular for centuries, Ptolemy’s geocentric theory of the cosmos was overturned by Copernicus’s heliocentric one. And although Newton’s theory of gravity persisted essentially unchallenged for 200 years and remains eminently practical, it was eventually superseded by Einstein’s general theory of relativity. In popular usage, a “theory” is often treated as an account that has yet to be tested (synonymous with “hypothesis”), but in academic circles theories are well-supported but not infallible visions of reality that help us to explain natural occurrences and to suggest new investigations. JW
c. 360 BCE
Philosophical Realism
Plato
The idea that reality, including abstract reality, exists independently of observers
If a philosopher defends the claim that an object or event exists in some relevant sense, that philosopher is said to be a realist about that object or event. For example, if a philosopher defends the existence of numbers as abstract objects, as does Plato (c. 424–c. 348 BCE), that philosopher is a realist about numbers and abstract objects. A philosopher who denies the existence of numbers and abstract objects, such as William of Ockham (c. 1285–1347/49), is an anti-realist or “nominalist.” Traditionally, a nominalist is an anti-realist about abstract objects or universals who asserts that the only things that exist are concrete. However, this causes some confusion when discussing medieval philosophers such as William of Ockham and Peter Abelard, who identify themselves as nominalists but defend the existence of an immaterial divine being. Thus, in philosophy, the term “nominalism” is commonly used synonymously with anti-realism.
It is important to note that realism need not imply objective existence; an event or object may exist even if it is only subjectively available to a subject or even if it is socially or mentally constructed, as in the case of conceptualism, although some conceptualists reject the nominalist appellation. Thus, realism allows for a wide-ranging interpretation of what may be termed “positive ontological status.”
Popular debates over realism about particular objects or events include the existence of God, the nature of moral values, the effects of social and political interactions, and the nature of consciousness. One of the longest-running debates between realists and anti-realists involves the nature of properties. Indeed, a strong contemporary strain of anti-realism challenges the idea that science delivers approximate truths about reality, and contends, instead, that it delivers only a set of powerful fictions about it. JW
c. 360 BCE
Universals and Particulars
Plato
“Particular” objects each embody a “universal” quality apparent in many objects
The idea that there are “universals” and “particulars” that might relate to one another in a certain way has plagued intellectuals since Plato (c. 424–c. 348 BCE) advanced the idea in about 390 BCE. “Particulars” are objects, such as a fire engine or pair of mittens. Fire engines and mittens, however, might both be red and thus share a common or “universal” redness. What is more significant, a particular or the universal it embodies? Is it an orange or its color, a zebra or its stripes, a golf ball or its shape? Universals can quite literally be everywhere, whereas particulars are limited to a specific entity and so are rooted in time and space.
“If particulars are to have meaning, there must be universals.”
Plato
Thinkers who hold to the primacy of universals over particulars claim that universals possess a pervasive reality (the color red, or a certain geometric shape) that is quite distinct from the particulars they inhabit. There are not just objects in the world that have stripes; the universality of stripes also exists. Whether universals exist is an ancient argument in metaphysics. In this debate there are nominalists, who deny that they exist, and realists, who maintain that they do. Realists in the Platonic tradition claim that universals are real entities that exist independently of particulars, suggesting that we need certain universals for words to have meaning and for sentences to be either true or false. Nominalists, on the other hand, argue that only individuals or particulars are real, and that universals simply do not exist. Nominalism posits that there are relatively few kinds of entities in the universe and, by eliminating unnecessary complexities, therefore offers a simpler explanation of the world around us. JS
c. 360 BCE
Obeying Laws
Plato
Laws should be adhered to because it is immoral to disobey them
A painting (1787) by Jacques-Louis David, showing Socrates after receiving his death sentence.
Laws—a set of rules providing a code of conduct for a community—have existed in human societies for thousands of years. Typically, disobeying a law is viewed as a crime and results in a punishment for the perpetrator, thereby providing an incentive to citizens to abide by the laws of their society. Plato (c. 424–c. 348 BCE), however, argued that a person ought to obey the law not merely to avoid punishment, but because they have a moral obligation to do so.
Written in c. 360 BCE, Plato’s Crito explains why the condemned Socrates refused to escape from prison and his sentence of death despite the measures taken by his friends to ensure his safe passage. Socrates, who speaks for Plato in the text, offers a number of reasons for why obeying the law is more important than obeying one’s own desires or the moral sentiments of the masses. After establishing that it is never morally permissible to do harm, Socrates argues that willfully disobeying the law harms the state by undermining the system of laws it is founded upon. Socrates then argues that it is because of the law that citizens enjoy security, education, and a number of other benefits. Furthermore, in so far as the law regulates marriage, the law is responsible for the very existence of its citizens. For these reasons, the laws are like a person’s parents or masters. It is, therefore, unacceptable for anyone to harm the law through disobedience.
Plato’s arguments continue to be influential today. What is especially significant is that he draws a distinction between the precepts of law and the precepts of morality, explaining why the distinction is both rational and pragmatic. Today this distinction is referred to as the “separation thesis.” Modern thinkers such as Joseph Raz follow Plato when they argue that the legitimacy of law and, by extension, the social institutions it protects are contingent upon maintaining a distinction between law and morality. DM
c. 360 BCE
The Philosopher King
Plato
The idea that the ruler of a nation should be a learned philosopher
A fragment of Plato’s The Republic (c. 360 BCE), his great work of philosophy and political theory.
In his Socratic dialogue The Republic, written in c. 360 BCE, Plato (c. 424–c. 348 BCE) puts forward his theory of ideal leadership. He argues that, in the best possible world—in which humanity can form a nation or a state that pursues utopian ideals of justice, knowledge, and truth—a nation can only be led by a select type of person. That ruler, called the “philosopher king,” must be first and foremost concerned with the pursuit of wisdom, arrived at through philosophy.
In The Republic, Plato describes an idealistic state named Kallipolis, ruled by wise guardians, or philosopher kings. For Plato, a philosopher was a person who pursued truth and maintained a detachment from factors irrelevant to that pursuit. A philosopher king focuses only on that which is truly important. For Plato, this meant that a leader must develop an understanding of fundamental truths, or forms, and not submit to the whims of popular opinion. Just as a ship’s captain with knowledge of celestial navigation must not pay attention to the derision of sailors who have no such understanding, so too must the philosopher king maintain the pursuit of justice and wisdom as he pilots the ship of state.
The Republic, along with its ideas of a utopian society and ideal rulers, has been one of the most influential works of political theory and justice in the Western world. The idea of the philosopher king has shaped how people have thought about political leadership and the role of government, and about how ideal leaders should act, think, and govern. However, some critics, such as British philosopher Karl Popper (1902–94), have decried the idea of rulers being permitted to ignore the opinions and wishes of the people. The concept of the philosopher king implies no safeguard against tyrannical individuals who impose their own ideals of leadership, and it is therefore antithetical to the idea of an open society. MT
c. 360 BCE
The Great Chain of Being
Plato
The belief that everything in creation, including living things, fits into a hierarchy
An allegorical depiction of the Christian world order, from Rhetorica Christiana (1579) by Fray Diego de Valadés.
In The Republic (c. 360 BCE), Plato (c. 424–c. 348 BCE) uses the idea of a single vertical line to represent all of existence. Plato divides the line into quarters: the topmost quarter represents concepts or ideas, such as justice or piety. The next quarter represents mathematical relationships. The third quarter represents tangible things, such as apples, horses, and carts. Finally, in the lowest quarter, are imitations: artistic imagery, reflections, and things of that sort.
Plato’s line represents a hierarchy. Ideas, says Plato, are timeless, indestructible, and dependent upon nothing else for their existence; they therefore exist to the fullest extent. Conversely, imitations are finite and corruptible, and they depend upon other things; they are therefore the lowest sort of beings.
“It is the possession of sensation that leads us … to speak of living things as animals.”
Aristotle, De Anima (fourth century BCE)
Plato’s student Aristotle (384–322 BCE) elaborated on this. He established a hierarchy of living things, ranking those creatures possessing the powers of thought, sensation, self-propulsion, and self-nutrition higher than creatures possessing fewer of these “vital powers.” Humans, then, occupy the top, and plant life the bottom.
Medieval thinkers added divine beings to the hierarchy, placing God at the pinnacle and an array of angels and archangels between God and humankind. Thus, what started out as Plato’s vertical line came to be known as the “Great Chain of Being.”
Contemporary evolutionary theory has adopted and adapted the idea. The “vital powers” that Aristotle identified have undergone serious reconsideration, giving rise to animal and environmental rights movements and the reevaluation of humanity’s role in the universe. DM
c. 360 BCE
Justified True Belief
Plato
Knowledge consists of beliefs that are objectively true and can be justified
On the most widely accepted account, knowledge is mental assent to a true proposition on the basis of evidence sufficient for thinking that the proposition is true. “Mental assent” refers to the psychological attitude “belief,” and “evidence sufficient for thinking a proposition is true” refers to “justification.” Thus, knowledge is justified true belief.
Originating in the dialogue Theaetetus (c. 360 BCE) by Plato (c. 424–c. 348 BCE), this account was criticized by Socrates with regard to knowledge derived from the senses. The definition refers only to propositional knowledge (“I know that Mozart was a composer.”). It is unclear how it might relate to skill-based knowledge (“I know how to play Mozart’s ‘Adagio in C.’”) or knowledge by acquaintance (“I know [of] Mozart’s ‘Adagio in C.’”).
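The tripartite structure of the account is sometimes set out schematically in modern discussions. The formula below is such a shorthand sketch, not Plato’s own notation; the predicate letters K, B, and J are merely illustrative labels for “knows,” “believes,” and “is justified.”

```latex
% Schematic rendering of the justified-true-belief (JTB) account of knowledge.
% Illustrative labels (a modern convenience, not Plato's notation):
%   K(S,p): S knows that p
%   B(S,p): S believes that p
%   J(S,p): S is justified in believing that p
\[
  K(S, p) \iff p \,\wedge\, B(S, p) \,\wedge\, J(S, p)
\]
% Gettier cases target the right-to-left direction: all three conditions on the
% right can be met while, intuitively, S still fails to know that p.
```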
“If one cannot give and receive an account of a thing, one has no knowledge of [it].”
Plato, Theaetetus (c. 360 BCE)
An important challenge to this theory of knowledge as justified true belief is the “Gettier problem.” It was first offered by Bertrand Russell (1872–1970), but made famous by Edmund Gettier (b. 1927). Russell and Gettier constructed examples to show that someone may have a justified true belief without having knowledge. Russell imagined someone, S, who believes truly that it is noon on the basis of looking at a clock. However, S does not know that this clock stopped working exactly twelve hours ago. So although it is noon, S has good reason to believe that it is noon, and S believes it for precisely that reason, it nevertheless seems that S does not know that it is noon. These “Gettier cases” sparked a number of revisions and additions to the classical theory of knowledge, and the quest for a satisfactory account of knowledge continues. JW
c. 360 BCE
Absolute Power Corrupts Absolutely
Plato
The view that possession of absolute power inevitably has a corrupting effect
A detail from Ambrogio Lorenzetti’s fresco Bad Government and the Effects of Bad Government on the City Life (1337–39), located in the Palazzo Pubblico, Siena, Italy.
Probably the most ancient expression of the idea that power has a corrupting effect appears in the parable of the Ring of Gyges in The Republic (c. 360 BCE) by Plato (c. 424–c. 348 BCE). In the parable, the otherwise virtuous Gyges indulges in corrupt behavior after finding a magic ring that renders him invisible.
“Power tends to corrupt, and absolute power corrupts absolutely … There is no worse heresy than that the office sanctifies the holder of it.”
Sir John Dalberg-Acton, letter to Bishop Mandell Creighton (1887)
However, the maxim “absolute power corrupts absolutely” originates much later, being a paraphrase of a letter written by Sir John Dalberg-Acton (1834–1902), a British Catholic historian better known as Lord Acton, to Bishop Mandell Creighton in 1887. Acton scolds Creighton in the letter for his suggestion, in previous correspondence, that the pope, king, or any other person holding comparably high station ought to be judged according to standards different to those applied to common men. Acton argues that, quite to the contrary, “Historic responsibility has to make up for the want of legal responsibility. Power tends to corrupt, and absolute power corrupts absolutely.”
Acton, however, followed at least two distinguished persons in associating power with corruption: in a speech that was delivered in the British House of Lords in 1770, William Pitt, Earl of Chatham (1708–78), a former prime minister, had claimed that, “Unlimited power is apt to corrupt the minds of those who possess it; and this I know, my Lords, that where law ends, tyranny begins!” Acton’s observation was also anticipated by French writer, poet, and politician Alphonse Marie Louis de Prat de Lamartine (1790–1869), who, in his essay France and England: a Vision of the Future (1848), had claimed “It is not only the slave or serf who is ameliorated in becoming free … the master himself did not gain less in every point of view … for absolute power corrupts the best natures.” Acton, too, believed that few could resist power’s corrupting effect, asserting, “Great men are almost always bad men.” DM
c. 360 BCE
Compulsory Education
Plato
A system of education that begins at birth and identifies society’s future leaders
A second-century relief from a Roman burial monument, depicting a boy reading to his teacher. During Roman times, schooling was provided for boys only.
The notion of compulsory education refers to a period of education mandated by law or by some comparable authority. One of the earliest efforts to codify requirements for education is set out in the Talmud, the compendium of Jewish law. The Talmud recommends a form of private education in the family home that emphasizes religious matters in addition to training in whatever the family vocation might be.
“I would teach children music, physics, and philosophy; but most importantly music, for the patterns in music and all the arts are the keys to learning.”
Plato, The Republic (c. 360 BCE)
Plato (c. 424–c. 348 BCE) was one of the earliest thinkers to draw up the architecture of a full-blown system of public education. In The Republic (c. 360 BCE), he describes an education system designed to effect the social stratification that, according to him, is prerequisite for justice to prevail in a state. The education system of his republic begins at birth, when infants are removed from the family and raised by a collective. Educators are tasked with monitoring children in order to identify leadership qualities so that those who have “gold in their souls” (Plato uses this precious metal as a metaphor for leadership potential) can be properly trained to assume elevated offices of state, the highest of which is the office of philosopher king.
In Laws (c. 360 BCE), a later work, Plato presents a more moderate education system, one that more closely resembles contemporary systems. Infants are not removed from their families and there are no philosopher kings. However, proper social stratification is still the objective. Formal schooling begins at the age of six, when the curriculum focuses on literacy and arithmetic. By age thirteen, music is introduced into the curriculum, and at age eighteen the youth begins his terms of military service. By the age of twenty-one, those students demonstrating the necessary aptitudes are selected for advanced studies that lead to the highest offices of the state. Education systems surprisingly close in character to this ancient model are now the norm in every developed country. DM
c. 360 BCE
Idealism
Plato
Experience of the world is all there is, and our minds are the only things that are real
For an idealist philosopher, the human mind is the fulcrum upon which reality rests. Nothing exists unless we perceive it, sense it, or know it. Though philosophical idealism comes in many different varieties, it can be divided into two basic forms. The first is epistemological idealism, a position that holds that a person’s knowledge of the world exists only in the mind. The second is metaphysical idealism, which states that reality itself is dependent on our perceptions and our minds, and that the idea of an independent, physical reality is nonsensical.
In his work The Republic (c. 360 BCE), Greek philosopher Plato (c. 424–c. 348 BCE) relates the Allegory of the Cave—a dialogue between Plato’s mentor, Socrates, and Plato’s brother, Glaucon—to illustrate the role of education in the development of the mind and its gradual understanding of ideal reality. At the allegory’s heart is Plato’s Theory of Forms: these nonmaterial, abstract entities are, for Plato, the highest level of reality.
“Those bodies which compose … the world have not any substance without a mind.”
Bishop George Berkeley, philosopher
Later thinkers, such as Irish philosopher Bishop George Berkeley (1685–1753), reintroduced idealism, arguing that the objects we encounter and perceive have no existence independent of the mind; only our perceptions of them are real. In Berkeley’s words, “Esse est percipi”: To be is to be perceived.
Idealism had its heyday during the early nineteenth century. It was so popular, and controversial, that it garnered strong reactions and moved other thinkers to create countering positions. In this way, it has played an important part in the development of logical positivism, analytic philosophy, and Marxism. MT
c. 360 BCE
Allegory of the Cave
Plato
Plato’s metaphor for the human condition and the need for education
An illustration of Plato’s Allegory of the Cave that appeared in the July 1855 edition of Magasin Pittoresque.
The Allegory of the Cave appears in Plato’s (c. 424–c. 348 BCE) Socratic dialogue, The Republic (c. 360 BCE). It begins with an underground cave inhabited by prisoners who have been chained there since childhood. The prisoners can only look toward the back wall of the cave, where flickering shadows stimulate their imaginations and cause them to think that all they imagine is real. However, if a prisoner were to get free and see the cause of the shadows—figures walking in the vicinity of a flickering fire—he would begin to reassess what he thought real. Moreover, if this prisoner were to escape from the cave, he would then be able to see the sun itself, which illuminates everything in the world in the most real way. Yet if this free man were to return to the cave to explain his findings to the other prisoners, he would no longer be accustomed to the darkness that they share, and to those ignorant people he would sound like a fool or worse.
“Imagine people living in a cavernous cell down under ground …”
Plato, The Republic (c. 360 BCE)
This allegory has been highly influential in the history of philosophy for its succinct depiction of Plato’s epistemological, ethical, metaphysical, and educational thought. The cave represents our world; we humans are prisoners who imagine such things as sex, power, and money to be the overwhelmingly real and important things of life when, in fact, they are shadows of greater goods that we have the capacity to know. The fire is the inspiration that helps us ascend until we finally come face to face with reality. The liberated prisoner’s descent back into the cave represents the ethical duty of the philosopher, who, having discovered the truth, tries to help others seek enlightenment. AB
c. 360 BCE
Platonic Love
Plato
The type of love between two people that transcends obsessive physicality
Platonic love as it is understood today is a love between two people that is chaste and affectionate, but free of physical intimacy and sexual desire. The concept has its roots in the work of the Greek philosopher Plato (c. 424–c. 348 BCE), specifically his philosophical text The Symposium, written in c. 360 BCE. In the text, Plato dissects a series of speeches made by men at a drinking party, or symposium, held in the Athenian household of the poet Agathon. The speeches, expressed in the form of a dramatic dialogue, are written “in praise of love,” and the speakers include an aristocrat, a legal expert, a physician, a comic playwright, a statesman, the host Agathon himself in his role as tragic poet, and Socrates (c. 470–399 BCE), Plato’s own teacher and one of the founders of Western philosophical thought.
“We ought to have lived in mental communion, and no more.”
Thomas Hardy, Jude the Obscure (1895)
It is Socrates’s speech that has since been interpreted as introducing the concept of platonic love. Socrates condemns the sort of love that sees a man and a woman obsess over the physical act of love (eros in Greek) to the detriment of the pursuit of higher ideals in philosophy, art, and science. He speaks of the ideas of a prophetess and philosopher, Diotima of Mantinea, for whom love is a vehicle through which we can contemplate the divine and possess what she calls the “good.” According to Diotima—here “teaching” with Socrates in the role of “naive examinee”—a physically beautiful person should inspire us to seek spiritual things. Her idea of love does not exclude the possibility of physical love, however; the idea that platonic love should exclude physical love altogether is a later, and quite inaccurate, Western construct. BS
c. 360 BCE
Formalism in Art
Plato
The concept that a work’s artistic value is entirely determined by its form
Plato (c. 424–c. 348 BCE) believed that the identity, usefulness, and essential meaning of any given thing are bestowed upon it by the idea of that thing, or, rather, its form. According to Plato, the physical constituents of a particular thing are perishable, fleeting, and replaceable, and are therefore irrelevant to its essential nature—Christopher Columbus might replace a plank on the Pinta, but that does not, in Plato’s view, make it a different ship. Applied to art, Plato’s principle implies that form alone bestows meaning upon any given work of art. This philosophy of art has come to be known as “Formalism.”