Preface

By Arthur Caplan

These days, it is not uncommon to hear commentators on higher education accuse those who spend time studying the humanities in college or university of being foolish. The idea that a person might take courses in philosophy, psychology, religion, the arts, sociology, or politics strikes many as simply ludicrous. They argue that the whole point of education is to get a job, and that to get a job a person needs to have a practical skill or possess a body of readily applicable knowledge. By that measure, the study of “big ideas” in the humanities or social sciences is at best a luxury and at worst pointless.

As you begin to browse through 1001 Ideas That Changed the Way We Think, one of the most important things that you realize is just how utterly wrong those people are who see the value of education only in terms of the career opportunities it creates. A cursory look through this book soon makes it evident that the discoveries, inventions, and findings that make the most difference in our lives are just as likely to emanate from the humanities and social sciences side of the intellectual landscape as they are from technology, science, and engineering. In fact, the latter are only likely to flourish when firmly embedded in a wider context of big ideas that allow them to do so.

Moreover, the rich history of ideas that can be found in this book reveals an even more important truth—what it is that makes life meaningful is not found simply in an education that prepares one to excel at computer programming, advertising, bioengineering, or business. It is only through an engagement with great ideas that we can find meaning and purpose in our lives. These ideas enable us not only to decide on but also to defend the personal views that we adopt on important matters. Is there an absolute moral ethic that you should always follow, as Plato claims? What is it that we actually see when looking at art, as John Berger asks? Do you agree with the American Psychiatric Association’s diagnostic manual classification of what is “normal” with respect to mental health? Should you live your life wary of the lure of the ephemeral, materialistic trinkets and baubles that are shilled by capitalism, as Jean-Jacques Rousseau urges? The way that you live your life and the way that you choose to present yourself to the world will be far more enriched by reading this book than by spending years in a graduate school of business, law, or accounting. Indeed, the rationale for doing the latter ought to be grounded in an immersion in the former.

There is yet another reason why the entries in this book merit your attention—they are intellectually challenging, and as a result are fun to think about. Jeremy Bentham’s full-throated articulation of utilitarianism provides a fascinating antidote to centuries of morality reliant on divine or royal authority, virtue, and inviolate principles. His thinking is reflected in every cost-benefit and risk-benefit analysis that accompanies nearly all public policy thinking today. Even recent efforts to install happiness as the correct measure of a nation’s overall status, as suggested by King Jigme Singye Wangchuck of Bhutan, owe a great deal to Bentham. However, for all his brilliance, Bentham sometimes failed in applying his theories to social reform. His efforts to promote the Panopticon as an alternative to the harsh, often inhumane conditions that dominated prisons and asylums, with their tiny quarters, bars, locks, lack of windows, and chains, were noble. But the notion of being subject to permanent surveillance caused other problems, because a certain amount of personal privacy is often considered requisite for maintaining a person’s sense of self and self-esteem.

In looking forward, there are plenty of big ideas reviewed in this book that can help to provide conceptual handrails for what might be in our future. Perhaps one of the most provocative is Garrett Hardin’s “lifeboat Earth” argument that, in a world of limited resources with a growing population, we cannot let the poor, weak, and disadvantaged consume resources if it means that the ability of all to survive will be imperiled. While his dire forecast of the inevitability of life and death rationing has power, it may be that some of the other ideas examined in these pages will provide us with the tools to generate more resources or reduce population growth. By thinking big, we may be able to make the chances of Hardin’s bleak prediction coming true much smaller.

As Bentham knew, a big idea is worth promoting and even testing in the real world. And as will become apparent upon reading this book, a big idea is also worth criticizing, endorsing, dismissing, and amending—but only you will know which seems the most appropriate response to what you find in these pages. Enjoy.

New York, United States

Introduction

By Robert Arp

I am a philosopher by training, so, as philosophers seem naturally to be attracted to ideas of any kind, it makes sense that I would be the editor of a book like this one. The word “philosophy” is derived from the Greek philein, meaning “to love” or “to desire,” and sophia, meaning “wisdom” or “knowledge.” Philosophy is therefore the “love of wisdom,” and an important way to attain knowledge is by exposing yourself to plenty of ideas.

But what exactly is an idea? The English word is a direct cognate of the Latin idea, which is derived from the Greek ἰδέα. The Greek word ἰδέα is itself a derivation of the Greek verb idein, meaning “to see.” Ancient Greek philosopher Plato used the word “idea” to refer to “Forms”—unchanging, perfect, ideal things of which everything in the universe was a better or worse copy. There was a Form of human, a Form of cat, a Form of tree, even Forms of justice, beauty, and goodness. The Forms were real, existing things, and a person had to use their mind to reason about, think about, and understand them, especially when philosophizing.

Plato’s definition of an idea might strike us as strange, because for him ideas were extra-mental things “out there” in reality—not visible to the eye, but knowable by the mind—whereas nowadays we think of ideas as concepts or images that exist, at best, in a person’s mind. Nonetheless, Plato’s concept of an idea was influential in Western history for many centuries. In the Middle Ages, medieval Christian philosophers and theologians—who borrowed much from the theories of Plato and his prize student, Aristotle—used the term “idea” to refer to an archetype of some thing that existed in the mind of the Christian God. According to this view, before God created the universe and all that it contains, He had the idea of the universe and all that it contains already in mind. It was also around this time that the word “idea” began to be used interchangeably with the Latin words conceptio (conception), conceptus mentis (mental concept), notio (notion), verbum mentale (mental word), and species intelligibilis (intelligible species).

By the seventeenth century and the birth of what is known as Modern philosophy, “idea” no longer referred to some Platonic concept existing outside of the mind. Rather, it had reverted to its original Greek connotation of “seeing.” For example, René Descartes noted that “Among my thoughts, some are like images of things, and it is to these alone that the name ‘idea’ properly belongs.” He also wrote that “idea is the thing thought upon.” By the time that John Locke penned his influential An Essay Concerning Human Understanding in 1690, the word “idea” had assumed a purely mental association: “the word ‘Idea’ stands for whatever is the object of the understanding when a man thinks. I have used it to express whatever is meant by phantasm, notion, species, or whatever it is which the mind can be employed about when thinking.”

Locke envisioned ideas to be objects of thought, a definition that was taken up in the twentieth century by the U.S. philosopher and educator Mortimer Adler (who cofounded the Center for the Study of The Great Ideas in Chicago in 1990). However, Adler added the caveat that an idea must be a common object “being considered and discussed by two or more individuals.” In his short article “What is an Idea?” (1958), Adler maintained that “freedom and justice, war and peace, government and democracy might be called ideas or objects of thought.” He contrasted this “objective” definition of an idea with a “subjective” definition, wherein an idea is understood to be an individual’s own conscious thought or perception. Adler’s notion of an objective idea was most clearly expressed in his classic work The Idea of Freedom (1958) in which he described five different conceptions of freedom and traced the discussion of them among philosophers throughout history.

Today, the word “idea” has several connotations and denotations, many of which are aligned with these historical conceptions. For example, the first two definitions of “idea” that are given in the Merriam-Webster Dictionary are: “a transcendent entity that is a real pattern of which existing things are imperfect representations,” and “a standard of perfection (ideal),” both of which have clearly been informed by Plato’s Theory of Forms.

In line with Descartes’s description of ideas as “images of things,” an idea can also be considered a perceptual image, or a picture in a person’s mind of something, including a sensation that has been “called to mind” through memory. For example, imagine the Eiffel Tower: the “picture” that this conjures up in your mind is an idea. These mental images do not always have to be fully formed, however: an idea can also be thought of as a more general “mental impression.” A good example of an idea in this sense is Einstein’s theory of special relativity. Unless you have a clear knowledge of physics, you probably have an inchoate notion in your mind that special relativity has something to do with E = mc², motion being relative, and objects traveling at the speed of light. You therefore have some idea or impression of special relativity, but it is vague, rudimentary, and obviously not as complete as the idea of special relativity in the mind of a person with a Ph.D. in astrophysics.

Another understanding of an idea is as a concept. The words “thought” and “idea” are often used interchangeably, as are the words “opinion” and “idea.” These concepts, thoughts, and opinions are frequently referred to in terms of being understood or formulated, conveying the sense that the ideas behind them are clear in the speaker’s mind. Examples of this type of idea include “digestion,” “cost-benefit analysis,” or “gravity affects all material bodies in the Earth’s atmosphere.” There is an intimate connection between understanding a concept, being able to formulate a thought, and having knowledge. The most knowledgeable people in a given area of study tend to have a solid understanding of the ideas that constitute that field.

An idea can also be read as synonymous with a goal, end, aim, or purpose—“I took a job with the idea of getting some money together,” for example. Or it can be a concept in a person’s mind that might be so abstract as to not be imaginable in a picture-like form, such as the idea of what constitutes the process of reasoning.

In the following pages, you will find 1,001 of the most important ideas that have ever been imagined, conceived, and articulated throughout the course of recorded history. These are 1,001 ideas that changed the way we think. Dictionaries describe thinking as a process that utilizes ideas in some way—to plan, predict, pronounce, produce, and perform numerous other activities—a straightforward and commonly understood notion. It is a simple fact that you cannot do any thinking without ideas! So many ideas have changed people’s way of thinking, with their impact ranging from small groups of individuals to entire societies and even the whole world. You would not be reading this book right now were it not for Johannes Gutenberg’s ideas of mechanical movable type and the printing press so as to “spread learning to the masses” in the fifteenth century. And you would not be reading anything at all if the ancient Sumerians had not had the idea to design pictograms and a system of writing some 5,000 years ago.

It is possible to organize and classify ideas in many ways. In an attempt to be sensible and economical, the ideas in this book have been placed in one of the following categories: Philosophy; Religion; Psychology; Science and Technology (including mathematical ideas and inventive ideas); Politics and Society (including education ideas, legal ideas, and economic ideas); and Art and Architecture (including music ideas and literary ideas). In the text for each idea you will find a description of exactly what the idea is; an account of its origin (who or where the idea came from); a quotation that uses, or is about, the idea; and a brief explanation of why the idea is important.

1001 Ideas That Changed the Way We Think is ordered chronologically, but it has not always been easy to establish a definitive date for when each idea first appeared. Generally we have used the earliest recorded instance of the idea, or the date that the first work containing the idea was published. Time periods for each chapter have been simplified into the following historical eras: Ancient World (2,000,000 BCE to 499 CE); The Middle Ages (500 to 1449); Early Modern (1450 to 1779); Late Modern (1780 to 1899); Early Twentieth Century (1900 to 1949); and Contemporary (1950 to present).

You will notice that a good many of the titles of the ideas in this book read more like those of an invention, mechanism, device, contraption, or even a process, activity, or system—such as the Kinetoscope (an 1890s machine that magnified looped film for a viewer), the telephone, the map, the magazine, the encyclopedia, or even waterpower, groupthink, nuclear fusion, and breakdancing. It can be hard to divorce these ideas from their aforementioned uses, but the authors have tried to present the idea behind the invention, or the idea that gave birth to a process, or the idea that acted as a catalyst for a system, rather than simply describe what that invention, process, or system does.

You will also notice that there appear to be numerous principles, laws, rules, theories, or hypotheses in this book. In these cases, the principles, laws, and the like are themselves the idea. Examples of this type of idea include the uncertainty principle, the second law of thermodynamics, the greenhouse effect, presumption of innocence, the Ten Commandments, and, one of my personal favorites, Godwin’s Law (coined by Mike Godwin in 1990), which states that if an online discussion continues long enough—on any topic whatsoever—someone in the discussion will inevitably make a comparison to Adolf Hitler or the Nazis.

I hope that you find as much joy in reading these 1,001 ideas as I did when the contributors submitted them to me to edit. As a final thought, I will leave you with a quotation from a speech given in 1963 by the United States’ thirty-fifth president, John F. Kennedy, that I recall from a class on the U.S. government during my teenage years: “A man may die, nations may rise and fall, but an idea lives on. Ideas have endurance without death.”

Ancient World

Pre 500 CE

A painting from the tomb of Ramses I (c. 1290 BCE), showing the Egyptian pharaoh with the gods Harsiesis and Anubis. Images like this one were common in Egyptian funerary art.

Archaeologist Steven Mithen has put forward the theory that, around 30,000 years ago, our hominin ancestors’ mental modules opened up, and ideas and information began to flow freely between them—a process that he termed “cognitive fluidity.” It is likely that the first ideas that humans came up with had a practical application, as in the case of the Levallois Technique for shaping flint tools. Early humans then applied creative thought to develop ideas such as clothing, jewelry, anthropomorphism, and Paleolithic cave art. Later, with the rise of the ancient civilizations of Egypt, Greece, and Rome, countless abstract concepts were formed in areas such as mathematics and philosophy.

c. 1,600,000 BCE

Human Control of Fire

Homo erectus

Harnessing fire in order to use its properties as a practical tool

Controlling fire has been a hallmark of human culture since before the existence of modern Homo sapiens. Early people obtained fire from natural sources, later developing a variety of methods to create fire artificially. The ability to create, control, and use fire remains essential to human civilization.

The first exposure that early humans had to fire most likely came from wildfires and forest fires sparked by lightning. While destructive and potentially deadly, these blazes provided early access to fire as a tool, although it was not yet a force that people could control, much less create at will. There is evidence to show that as early as 1.6 million years ago Homo erectus groups had harnessed fire to some extent, and by 400,000 to 250,000 BCE there is clear evidence that Homo erectus could control and perhaps even create it. By 125,000 BCE, well after the emergence of modern Homo sapiens, human use, control, and creation of fire were widespread and common.

“Fire, though it may be quenched, will not become cool.”

Ovid, ancient Roman poet

Humanity’s mastery of fire had an immediate and profound impact on its evolution. Fire gave people protection from wild animals, allowed them to illuminate the darkness, gave warmth to fend off the cold, enhanced their ability to fashion tools, gave them the ability to cook food, and served as an effective deterrent against insects and pests. Fire was so useful in the preparation of food that humans became the only animal able to thrive nutritionally on cooked food but not on raw food alone. Fire’s importance in culture is so marked that the word itself became a ubiquitous metaphor used to describe ideas such as romantic love, conflict, destruction, and intense desire. MT

c. 800,000 BCE

Cannibalism

Unknown

The practice of humans eating the flesh of other humans

Markings on these human bones, which date to around 12,000 years ago, are thought to indicate cannibalism.

The earliest evidence of cannibalism comes from butchered bones found in the Gran Dolina cave in Spain, dating back to c. 800,000 BCE. These bones suggest that the practice existed among members of western Europe’s first known human species, Homo antecessor, and similar findings from later periods show that it continued with the emergence of Homo sapiens and other hominid species. There are several theories as to why cannibalism first arose: one hypothesis suggests that it may have been a result of food shortages; another that it may have functioned as a form of predator control, by limiting predators’ access to (and therefore taste for) human bodies.

Cannibalism persisted into modern times in West and Central Africa, the Pacific Islands, Australia, Sumatra, North America, and South America. In some cultures, human flesh was regarded as just another type of meat. In others, it was a delicacy for special occasions: the Maori of New Zealand would feast on enemies slain in battle. In Africa, certain human organs were cooked in rites of sorcery because witch doctors believed that victims’ strengths and virtues could be transferred to those who ate their flesh. In Mexico, the Aztecs are thought to have sacrificed prisoners of war to their gods and then eaten their flesh themselves. Australian Aborigines ate their deceased relatives (endocannibalism) as a mark of respect.

“I ate his liver with some fava beans and a nice chianti.”

Thomas Harris, The Silence of the Lambs (1988)

The colonization of these regions between the fifteenth and nineteenth centuries by European Christians made cannibalism taboo. However, it occasionally still occurs in extreme circumstances. GL

c. 650,000 BCE

Clothing

Unknown

Garments, fabrics, or other coverings worn by humans over their bodies

The materials that early humans used to create the first clothing were probably those they found around them, such as pliable grasses, plant leaves, and animal skins. Because these materials decompose so easily it is difficult to determine when humans first created clothing. Researchers studying human lice have suggested that clothing could have become widespread as early as 650,000 years ago, while other studies suggest an origin of about 170,000 years ago. These time periods correspond to either the beginning or the end of an Ice Age, indicating that clothing may have first developed as a way of coping with colder climates.

The first items of clothing were most probably fairly crude in their construction, draped around the body and tied with sinew. The development of the needle around 35,000 years ago by Homo sapiens allowed the creation of more complex clothing—garments that could be layered and tailored to fit certain parts of the body. It has been hypothesized that this technology may have been what enabled Homo sapiens to flourish as a species over the Neanderthals, who were more adapted to the cold biologically and thus did not have the impetus to refine the cutting and sewing techniques that were needed for warmer clothes.

“Clothes can suggest, persuade, connote, insinuate, or indeed lie …”

Anne Hollander, Seeing Through Clothes (1975)

Although clothing may have been created out of necessity initially, it has since become far more than a means of adaptation to the environment. Throughout history it has been used to protect a wearer from the elements, but also as a way to convey nonverbal information, such as signaling differences in wealth, class, sex, or membership of a particular group. MT

c. 600,000 BCE

Honoring the Dead

Homo heidelbergensis

The practice of paying respect to a deceased person through specific rituals

The 60,000-year-old burial of a Neanderthal man in a cave at La Chapelle-aux-Saints, France.

It is difficult to pinpoint when the idea of honoring the dead began. There is some evidence to show that Homo heidelbergensis (who existed between 600,000 and 400,000 years ago) were the first proto-humans to bury their dead. Whether they honored their dead or ascribed some kind of spiritual aspect to the burial process is unknown, however. There are human burial sites from about 130,000 years ago that show more convincing evidence that those performing the burial intended to remember or honor the deceased, through the position of the body, the inclusion of items such as tools and animal bones with the body, and the addition of decorative elements to the tomb. This suggestion of ritual in the burial process could indicate that it was one of the first forms of religious practice.

“Our dead are never dead to us, until we have forgotten them.”

George Eliot, author

In some cultures or traditions, honoring the dead is an ongoing practice in which deceased relatives or ancestors are viewed as having a continued presence among, or influence over, the living. In others, the traditions that honor the dead occur immediately after someone’s death, or at various times throughout the year. Honoring the dead is not necessarily a religious tradition, though many religions have specific and extensive rituals for the practice.

Honoring the dead is a near-universal practice that exists across geographical, cultural, and religious boundaries. The shared rituals involved in the custom provide a social bond in societies, and a way to link the deceased with the living. These elements are strongly present in many religious rituals, often forming the basis of individual, and cultural, identities. MT

c. 400,000 BCE

Using Sharp Projectiles

Homo heidelbergensis

Creating tools and weapons with sharpened points or tips

Two examples of Solutrean points. The Solutrean tool industry existed between c. 20,000 and c. 16,000 BCE, and was characterized by finely crafted, leaf-shaped blades.

Humanity’s first use of sharp projectiles predates history, as three wooden spears found in Schöningen, Germany, show that Homo heidelbergensis had used projectile weapons by at least 400,000 BCE, and perhaps as early as 500,000 BCE. The longest of the three spears measured 7 feet 7 inches (2.3 m), and all of them had a thicker section toward the front in the style of a modern javelin, which suggests that they were specifically used for throwing rather than thrusting. By 300,000 BCE, Homo neanderthalensis had begun using shaped stone spear points, and by 64,000 BCE stone-tipped arrowheads had appeared in South Africa.

“It is easy to dodge a spear that comes in front of you, but hard to avoid an arrow shot from behind.”

Chinese proverb

Until the development of sharp projectiles, humans had to rely on blunt weapons, such as rocks, throwing sticks, and their hands and teeth. Sharp projectiles were far superior to blunt weapons as they were not only deadlier, but also could be used from a greater distance. This allowed people to hunt larger, more dangerous game while retaining some measure of security. Sharp projectiles spurred technological development, leading people to devise new methods of shaping stone, new woodworking techniques, and, eventually, the mining and casting of metals. As further evidence of their importance, groups of wild chimpanzees in Senegal have recently been observed to fashion their own sharpened projectiles from tree branches for use in hunting. The frequency of projectile use was found to be higher among female chimpanzees, leading researchers to speculate that females may have played a key role in the evolution of tool technology among early humans.

Ever since the appearance of sharpened projectiles, human cultures have refined, perfected, and revered them for their simplicity and deadly efficiency. As the primary tools of warfare and survival, they were not replaced until relatively recently in human history when firearms became effective and widely available. MT

c. 250,000 BCE

Levallois Technique

Neanderthals

Neanderthal craftsmen develop a technique for making better flint tools

A flint tool shaped using the Levallois technique, discovered in Montreres, France. The core of a Levallois flake is often described as looking like the shell of a tortoise.

Dating back around 250,000 years, the Levallois technique is the name given to a method of knapping flint that was developed by Neanderthals and other proto-humans. The name derives from the Levallois-Perret suburb of Paris, France, where tools fashioned by this technique were discovered during archaeological digs in the nineteenth century.

“In terms of cutting tools (whether as knives or projectile points), the Levallois technique produced superior pieces.”

Brian Patrick Kooyman, professor of archaeology

The Levallois technique is a more refined version of earlier forms of stone knapping, which involved chipping pieces away from a prepared stone core. It enabled the tool’s creator to have much greater control over the shape and size of the final flake. The technique begins with selecting a pebble about the size of a hand. A striking platform is then formed at one end of the stone, and the edges are trimmed by chipping off pieces around the outline of the intended flake. The base of the stone is then struck in order to produce its distinctive dorsal ridge. When the striking platform is struck, the flake releases from the stone with a characteristic plano-convex configuration and all of its edges sharpened by the earlier chipping. The flake is then ready to use as a knife or as the point of an edged projectile weapon.

Populations distributed over a vast geographical region, from Africa to Northern Europe, employed the Levallois technique. It allowed the Neanderthals to perfect their spear-making industry, which in turn aided in the hunting of large animals. Being able to kill larger animals, and therefore feed more individuals while spending less time hunting, aided in the formation of stable human groups, enabling greater sedentism. It also allowed for the production of projectile points for early bow and arrow technology. The fact that the Levallois technique was refined and perfected by the Neanderthals gives the lie to the popular conception of them as crude and apelike brutes. APT

c. 150,000 BCE

Trade

Unknown

Exchanging goods, services, and other items of value

The first exchange of goods or services came about long before written history. There is evidence that long-distance commerce existed as far back as 150,000 years ago, and by the time that humanity emerged from the Neolithic period (10,000–2000 BCE) and began establishing cities and agrarian communities, trading had been firmly established as a vital part of life. The move toward a sedentary, agricultural lifestyle transformed the nature of human society, creating a surplus of food that allowed humans to evolve new occupations such as toolmaking and weaving. These craftspeople in turn created a surplus of their products, which they were then able to trade back for food. Villages began to specialize in making products that were in demand in other areas, and by 3000 BCE ancient Mesopotamians had established trade routes with the urban centers of the Indus Valley Civilization, perhaps linking disparate urban areas for the first time.

“Every man thus lives by exchanging, or becomes in some measure a merchant …”

Adam Smith, Wealth of Nations (1776)

Trade is an engine that drives economies, facilitates social interactions, spurs political change, and leads to the spread of ideas, languages, goods, cultures, religions, wealth, people, and diseases. Through trading, humans acquired goods from far-off lands, shared news of events, and pushed themselves to seek out corners of the world unknown to them in search of new opportunities. Trade has both stabilized relationships between potential enemies and led to conflicts, wars, and the subjugation, murder, and enslavement of millions. Over the course of history, empires have arisen, fallen, and been reborn as basic human desires have driven the need for trade. MT

c. 135,000 BCE

Jewelry

Paleolithic Middle East

Personal adornment, often made from precious or valuable materials

Jewelry found at a burial site at the Balzi Rossi Caves in Liguria, Italy, which dates back 25,000 years.

The earliest known jewelry comes from the Paleolithic Middle East, where people used sea snail shells to make beads as early as 135,000 years ago. Jewelry is not an art form confined to Homo sapiens, however, because evidence exists to show that Homo neanderthalensis created and used jewelry in Spain at least 50,000 years ago. These early forms of jewelry were most probably worn as protection from evil or as a mark of status or rank.

Over the millennia, humans have fashioned jewelry from bone, stone, wood, shells, feathers, teeth, and other natural materials, with metallic jewelry first appearing around 5000 BCE. By about 3000 BCE the ancient Egyptians had begun crafting gold and silver jewelry, sometimes incorporating glass and precious gems into their designs. The Egyptians believed that every gemstone carried certain mystical powers, which would be transferred to the owner when worn as jewelry. This association of jewelry with the spiritual and mystical extended to burying jewelry with the dead to take with them to the afterlife—a practice that was a common feature of many ancient cultures. Much of the ancient jewelry that is held in archaeological collections today was discovered in tombs.

“Rich and rare were the gems she wore, And a bright gold ring on her hand she bore …”

Thomas Moore, “Rich and Rare …” (1808)

The development of jewelry provided humankind with both a new form of beautification and another method of communication. It is an art that lets the wearer feel more attractive, powerful, or important, while at the same time conveying a symbolic message that here is a person of wealth, piety, or influence, or even one who is available—or unavailable—for romance. MT

c. 40,000 BCE

Shamanism

Unknown

A magico-religious tradition built around a practitioner who contacts the spirits

A wooden figure representing a shaman associated with the Inuit spirit Taqhisim. The shaman relied on the spirits with whom he was associated for help in his duties.

Shamanism is the general magico-religious tradition built around the figure of the shaman, and is a phenomenon both ancient (dating back to at least 40,000 BCE) and global. Most of the oldest art in the world—“The Sorcerer” cave painting in France, for example—is shamanistic, and most of the oldest texts in the world—Mesopotamian and biblical texts, for example—allude to shamanistic practices such as necromancy (contacting the spirits of the dead). The word “shaman” is derived from the Tungus word saman, which refers to a “priest” or person—either male or female—who, in an altered state (such as a trance or a drug-induced hallucination), contacts the spirit world for help.

“It was not I who cured. It was the power from the other world, and the visions and ceremonies had only made me like a hole through which the power could come to the two-legged.”

Black Elk, Oglala Sioux shaman

Although the specific features of shamanism vary depending on the culture in which it is practiced (Japanese Shinto is different from African shamanism, for example), all shamanistic traditions share four basic characteristics. First, the shaman is seen as the intermediary between the human world and the spirit world. Second, the shaman only becomes such an intermediary by being chosen by the spirits and by performing certain rituals, such as the banging of a drum. Third, in their altered state of mind, the shaman is able to ask the spirits about how to cure certain diseases or to question what the future holds (fortune-telling). And fourth, the shaman is responsible for perpetuating the magico-religious tradition by recounting sacred myths and stories.

Some later religions, such as the Abrahamic ones, opposed shamanistic practices. This was done not so much because of the practices themselves (the shaman figure is very similar to a prophet or priest) as because the shaman was said to attain his knowledge in the wrong ways, through both good and bad spirits. Nevertheless, shamanism still endures today, making it one of the world’s oldest religious traditions. AB

c. 40,000 BCE

Anthropomorphism

Unknown

Attributing human characteristics to nonhuman entities

This lion-headed figurine, found in Hohlenstein Stadel, Germany, is one of the oldest sculptures in the world. It is made of mammoth ivory and dates back to c. 28,000 BCE.

Anthropomorphism—from the Greek words for “human” (anthropos) and “form” (morphe)—refers to the ancient activity of attributing human characteristics to nonhuman beings, such as deities, animals, vegetation, or the elements. Some of the oldest art—the Lion-Man of Hohlenstein Stadel (Germany), for example—depicts animals with human characteristics. Shamanistic traditions, which are connected with this type of art, tend to see spirits in all things, meaning that when they attribute human characteristics to trees—calling them “dryads,” for example—they believe that a tree spirit, much like a human spirit, is the principle that helps the tree to grow and act like a human. The same applies to all, or most, of nature.

“We acquire certain opinions of several animals and think of some as royal, others as silly, others as witty, and others as innocent.”

Philostratus, The Life of Apollonius of Tyana (c. 200)

A subcategory of anthropomorphism is anthropotheism, in which higher nonhuman entities—the gods or God—are depicted with human characteristics. Plato (c. 424–348 BCE) charged the Greek poets with “telling lies about the gods” because they depicted gods such as Zeus acting with petty human motives, and certain biblical passages, such as those describing God’s “right hand,” have often been seen as examples of anthropotheism.

In psychological terms, anthropomorphism has a number of implications. Ascribing human characteristics to a nonhuman entity can alter our views of and feelings toward that entity—it can make it seem more worthy of moral care and consideration, for example. The process of anthropomorphism can also be viewed as the mind’s way of simplifying complicated entities to enable us to understand them.

Today, anthropomorphism continues to be an important idea in shamanistic religions such as Taoism and Shinto. It also remains a prominent feature in popular culture, from cartoon characters such as Bugs Bunny to respected works of literature such as George Orwell’s Animal Farm (1945). AB

c. 40,000 BCE

Paleolithic Cave Art

Unknown

Powerful works of art drawn by prehistoric humans

The paintings in the Lascaux caves in France are estimated to be 16,000 years old.

Paleolithic cave art is a form of art dating back at least 40,000 years and distributed over a vast geographical area, from Europe, to India, to the Americas. Most of the paintings, such as those found in the caves of Lascaux in France, depict large wild animals: horses, cows, aurochs, and deer, as well as outlines of human hands. Curiously, full depictions of humans are absent in European cave art, but prevalent in African cave art. The caves themselves tend to be in places that are not easily accessible.

There are many theories about the origin of cave art. Henri Breuil (1877–1961) theorized that, given the number of large animals depicted in the artworks, it was likely an instance of “hunting magic” intended to increase the numbers of wild game hunted by early humans and Neanderthals. Another theory identifies cave art with early shamanistic rituals, perhaps, in some locations, involving the use of hallucinogenic substances. And some researchers have suggested that cave art may even have been an early form of animation.

“Seeing [the paintings] was startlingly intense … there was so much … I hadn’t expected.”

Gregory Curtis, The Cave Painters (2006)

Cave art seems to have emerged at the same time as modern Homo sapiens. However, we must not be too quick to attribute its existence to this development. Evidence suggests that at least some of the cave art in Europe was produced by Neanderthals. The art is a powerful visual link to our prehistory. And, as noted by Pablo Picasso, it tells us something about the art and culture of a particularly liberated proto-human society. It is the art that humans produced when there were no traditions or rules of representation to tell them how art and culture must be produced. APT

c. 40,000 BCE

Mediumship

Unknown

The necromantic communication between a disembodied and an incarnate human

While a shaman (a priest-type figure able to contact the spirit world) seeks to communicate with spirits in general and for many purposes, a medium—a subcategory of a shaman—is usually interested only in facilitating communication between the spirit of a disembodied human and an incarnate one. Shamanism originated around 40,000 BCE, and it is probable that the practice of mediumship arose at a similar time.

In the ancient Near East, mediums—also called necromancers because they consulted the dead (necro)—were seen as a valuable source of advice. Through them, people could consult the spirits for guidance in important matters. However, in the past century or so, and due largely to the enormous loss of life during World Wars I and II, mediumship in the West became popular not so much as a means of obtaining advice but as a way to establish closure with loved ones separated by physical death.

“Saul and two men went to the medium. ‘Consult a spirit for me,’ he said …”

The Bible, 1 Samuel 28:8

Mediumship is a very old, very influential idea and practice; nevertheless, it has often been viewed in a negative light, by both religion and science. The Abrahamic religions, for example, claim that though the power of a medium is occasionally real and the knowledge attained through such practices is valuable (knowledge per se is good), mediums should not be consulted because their method is not approved by God. Scientists take this even further, often seeing all mediums as frauds because their methods are difficult to test and their effects hard to quantify. A belief in and fascination with mediumship remains strong for many people, however. AB

c. 40,000 BCE

Soul

Unknown

The belief in a nonphysical entity with certain essential characteristics

A dead person’s soul travels through the underworld in this ancient Egyptian papyrus (c. 1600–1100 BCE).

The belief in the existence of souls has been prevalent in humankind for millennia. The concept of a soul is thought to have appeared around the same time as the emergence of shamanism, c. 40,000 BCE, which can be seen as the first example of religion. The discovery of ritual items at shamanistic burial sites suggests that those carrying out the burial believed in the afterlife, which in turn implies that they believed individuals to have a nonphysical component that survives after death. This nonphysical component—or soul—can be defined as the immaterial essence or animating principle of an individual life. It is generally viewed as separate from the body and is often credited with the faculties of thought, action, and emotion.

The oldest religious traditions—shamanistic, polytheistic, and monotheistic—generally agree that the soul grounds the identity of a given thing, and contains in it an organizing life-principle for that entity. Thus, for example, the vegetative life and identity of a rose is grounded in its soul, in the same way that the sentient life and identity of a zebra is grounded in its soul. For some religious traditions—shamanism, for example—the type of soul in a rose, zebra, or human is not clearly distinguished, which often leads to the notion that everything with a soul is of equal value. However, other traditions argue that the soul of a human is immortal and rational, and so is more valuable than the soul of a rose or a zebra, both of which are mortal and nonrational.

The near-universal belief that the soul of a human is immortal has led to the near-universal belief in both an underworld that houses the unworthy souls of the dead and a heavenlike place that welcomes the worthy souls. In the underworld the souls are seen in misery, partly because they are without bodies, whereas in the heavenlike place the souls are often depicted enjoying the fruits of the body. AB

c. 33,000 BCE

Symbols

Paleolithic Germany

A visual or material representation of a concept

The use of symbols to visually represent an idea dates back to prehistory. Even though the earliest cave paintings were created as long ago as 40,000 BCE, they were simply depictions: outlines of human hands that do not appear to have any symbolic meaning. Perhaps the earliest remaining human creation that could be considered a symbol comes from Hohle Fels cave near Schelklingen, in southern Germany. There, about 35,000 years ago, someone carved a mammoth tusk into a figurine depicting a woman with large breasts, large thighs, and a protruding stomach. The figure is widely believed to be a depiction of a fertility goddess or spirit, a symbolic representation of human reproduction and fecundity.

The first petroglyphs—figures and images engraved into rock—may have appeared about 30,000 years ago, but they certainly existed by 10,000 to 12,000 years ago. There are many theories to explain the symbolic purpose of the different petroglyphs that have been found across the globe, including conveying time and distances traveled, establishing territorial boundaries, or even representing a form of ritual language. Since then, symbols have been universally used across cultures, and are ubiquitous in modern societies.

You are using symbols as you read this sentence. Your eyes scan the symbols called letters and effortlessly translate those letters into the sounds they represent, which together form words that you understand, allowing you to read. That is the power and importance of symbols: their ability to convey meaning and information beyond their form almost instantaneously. Symbols can cross language and cultural boundaries with ease, relating notices, warnings, advice, and complex messages at a glance. Their usefulness relies on their simplicity, and the ability of a human viewer to see an abstract visual depiction and readily draw meaning from it. MT

c. 25,000 BCE

Sedentism

Unknown

The process by which nomads settled in permanent locations

Anthropologists and archaeologists agree that the earliest humans were hunters who moved from place to place in search of sustenance. Gradually, however, our ancient ancestors found locations where the climatic conditions were favorable and the natural resources abundant enough to enable them to remain in the same place year-round, year on year. They were the first to adopt the lifestyle now known as sedentism.

The earliest recorded sedentary cultures developed between around 25,000 BCE and 17,000 BCE in Moravia (part of the modern-day Czech Republic) and on the plains of western Russia, where people went hunting and fishing from permanent bases. In around 10,000 BCE the Natufians—who had by then been settled for at least 2,000 years in parts of modern-day Israel, Jordan, and Syria—began to cultivate plants, a development that occurred at about the same time as the Jomon in Japan first cultivated rice. By 5000 BCE early Scandinavians had established sedentary sites on which they supplemented barley and other plant produce by raising cattle for milk, meat, and hide.

“Researchers realized quite early on that sedentism was not … straightforward …”

Bill Bryson, At Home: A Short History of Private Life (2010)

To the modern mind, it may appear natural to prefer a fixed abode to a life of constant wandering. Nevertheless, this major change in human behavior remains only partially explained: did nomads settle in order to farm, or did they settle because they had started to grow crops and to domesticate animals? The debate continues as to whether sedentism is a consequence of agriculture, or vice versa. Either way, the two developments were crucial to the establishment of modern civilization. GL

c. 20,000 BCE

The Dome Structure

Unknown

Stable architectural structure in the shape of a half sphere

The Great Stupa at Sanchi is the oldest stone structure in India, built in the third century BCE.

The dome is the most stable of all simple architectural structures, and evidence of dome-shaped dwellings dates back tens of thousands of years. Domes are among the first structures that prehistoric people used as shelter, and they were created using branches, animal hide, and mammoth tusks. The dome is architecturally appealing because it is inherently stable and evenly distributes forces applied to it across the surface of the dome and downward toward the base.

The dome structure can be found across the prehistoric and ancient world. The most common use of domes in ancient civilizations was in structures that were intended to be permanent, including religious buildings and tombs. The architecture of the dome was advanced the most by ancient Roman architects, who developed the use of angled bricks to create “true domes” for temples and other public buildings. The Pantheon in Rome, built in 126 CE, remained the largest dome in the world for more than 1,300 years. Domes are also a key feature of the architecture that developed during the Byzantine era (c. 330–1453), and became a dominant characteristic in the architecture of Muslim societies throughout the Middle Ages (c. 500–c. 1450).

“Just give me a home, in a great circle dome Where stresses and strains are at ease.”

R. Buckminster Fuller, architect

Domes have had a lasting impact on architecture and continue to be prevalent in building designs. Reflecting the ancient use of the dome for important public edifices, many modern governmental buildings feature domes. A contemporary development is the geodesic dome, a structure that combines overlapping circles to form stable interlocking triangles. This allows the architectural construction of complete spheres. TD

c. 12,000 BCE

Money

Ancient Anatolians

The use of currency to pay for goods or services

A Neo-Hittite relief from the tenth to eighth century BCE, found in Carchemish, Turkey, showing two merchants agreeing on the terms of a trade.

Early human cultures relied primarily on bartering to exchange goods and services. It was not until about 12,000 BCE that the first form of money emerged in ancient Turkey, when people began using obsidian as a medium of exchange. Although obsidian could be utilized to create stone tools, those trading it did not necessarily use it for tool creation, instead employing it as an object of value. Some time between about 9000 and 6000 BCE, people began using livestock, such as cattle and sheep, as a form of currency. By about the sixth century BCE, people in western Turkey had created the first coins, melting gold into small pieces imprinted with a stamped image. The first true paper money appeared in China in the eleventh century, consisting of banknotes that provided a written promise to pay someone in precious metal on presentation—thereby advancing the concept of currency beyond that of a tangible object of value. Although most currencies throughout history have been tied to the value of a commodity, such as gold or silver, modern systems use fiat money: money that is valuable solely because it is used as such.

“Money … is none of the wheels of trade: it is the oil which renders the motion of the wheels more smooth and easy.”

David Hume, Essays: Moral and Political (1741–42)

Without money, humans would have to barter whenever they wanted to engage in any kind of commerce. Bartering, though still widely in use, can be incredibly unwieldy. If a buyer does not have what a seller needs, or vice versa, no commerce can take place. With money, however, humans introduced a universally agreed-upon object that all parties recognized as valuable. This had a profound impact on human society, creating new links between people through trade, which in turn enabled people to break away from their traditional kin groups. Market economies and strong currencies became the key to thriving cities, underpinning the intellectual, cultural, and technological advances that evolved from these centers of commerce. MT

c. 12,000 BCE

Preserving Food

Mesopotamia

Using natural processes and substances to prolong the length of time for which food is edible

Fish dried naturally by the sun and wind, on sale at a market in Phuket, Thailand. Drying is one of the oldest methods of naturally preserving food.

There is archaeological evidence of food preservation in prehistory, dating the practice at least as far back as 12,000 BCE. It is a custom that has occurred in virtually all cultures since then. The first method of natural preservation was drying, whereby food was simply left out in the sun, and since then numerous other techniques have been developed. These include refrigeration, freezing, smoking, salting, fermenting, chemical pickling (in which a food is placed in a preservative liquid), fermentation pickling (in which a food is allowed to ferment, producing its own preservative), sugaring, jellying, and (with increasing industrialization from the early nineteenth century onward) canning and bottling. These methods are sometimes employed jointly.

“Nearly everything we eat today has been treated … in order to prolong its life as a safe, transportable, salable, and storable food.”

Sue Shephard, Potted, Pickled, and Canned (2000)

It is difficult to overstate the importance of food preservation to human history: it enabled humans to make the important transition from hunting and gathering to settled agricultural communities. The ability to stockpile food freed people from the need to focus solely on hunting and gathering, and enabled them to develop new occupations such as toolmaking. The resulting cultural and technological developments enabled humans to make a huge leap forward in terms of civilization—and also sowed the first seeds of social disruption and inequity. Later, advances in food preservation enabled people to undertake voyages of discovery and conquest. Moreover, food preservation is responsible for creating new flavors and new kinds of food (such as cheese and wine)—and even rendering otherwise inedible foods (such as olives) palatable.

All of these methods of food preservation are still in use today. In the twentieth century, modern chemistry produced a variety of artificial food preservatives, and modern physics produced a new method of food preservation: irradiation. GB

c. 10,000 BCE

Mesolithic Sculpted Figurines

Unknown

The development of sculpture and material representations of abstract human ideas

This figurine from c. 6000 BCE is one of numerous Mesolithic artifacts discovered at Lepenski Vir, Serbia. The sculpture is thought to have been an object of worship.

One of the great developments of the Mesolithic period (c. 10,000–4000 BCE) was a proliferation of sculpted figurines of human and animal forms. Although figurative sculpture was not unknown before the start of the period—the Venus of Willendorf, for example, a 4.3-inch (11 cm) limestone figurine discovered in Austria in 1908, dates from around 24,000 BCE—most sculpted objects had previously been purely utilitarian (unadorned pots and drinking vessels).

“The increased production of female figurines and phallic images of stone … suggest[s] a rise in ritual practices.”

Heilbrunn Timeline of Art History, Metropolitan Museum of Art

The earliest such effigies were made in Japan during the Jomon period, which is roughly coextensive with the Mesolithic. Jomon pottery was made without the use of potters’ wheels (which date from around 4500 BCE), by coiling thin, rounded strips of wet clay into the requisite shapes and then firing them in low-heat ovens. The word Jomon means “cord marks.” Jomon pottery began as simple jars and bowls, but later developed into statuettes known as dogu.

Around 15,000 dogu effigies are known to exist. They are hollow, made of clay, and colored predominantly with red ocher. Most of them are between 4 inches and 1 foot (10–30 cm) in height and represent females with hourglass figures, large faces, and wide eyes (several of them appear to be wearing goggles). Some of the women are depicted as pregnant, with protuberant abdomens covered in vermilion patterns. They are thought to have been fertility symbols.

In 1986 the remains of a Mesolithic temple dating from around 10,000 BCE were discovered near Sanliurfa in southeastern Turkey. Inside it were figurines created in the eighth millennium BCE depicting headless humans with outsize genitals, together with effigies of fawns, foxes, pigs, scorpions, snakes, storks, and wolves. Researchers studying these figurines have suggested that, rather than being important cultural or religious symbols as previously supposed, they may in fact simply have been toys. GL

c. 10,000 BCE

Slavery

Unknown

A form of servitude in which one person is the property of another

An ancient Egyptian carving from c. 1332–1323 BCE, showing prisoners being counted; slaves were often gained through military conquest, providing a cheap labor force.

The practice of slavery is believed to have originated with the development of agriculture in around 10,000 BCE. Agriculture required a labor force, and enslaved prisoners of war provided a cheap and convenient means of creating one.

“Slavery they can have anywhere. It is a weed that grows in every soil.”

Edmund Burke, statesman

Slavery was legal throughout the ancient world. Slaves were typically acquired by military conquest or by purchase from foreign lands. Their conditions were not always harsh: in ancient Greece and Rome, slaves could own property and run their own businesses—they had almost all the same liberties as free men, other than the rights to serve in the military and to participate in political affairs. Without slave labor, many civilizations would have been unsustainable. The Roman Empire (27 BCE–c. 500) depended on slaves for everything, from building construction to copying manuscripts.

Slavery flourished in Europe until the fourth century CE, after which it was superseded by serfdom. However, slavery re-emerged in the fifteenth century with the opening-up of Africa and the discovery of the New World. During this time, large numbers of Africans were shipped to the West Indies and the Americas by Europeans to work as slaves in mines and on plantations.

Attitudes toward slavery began to change in the eighteenth century, however, and an abolition movement was formed by people campaigning against the cruelty and injustice of the practice. In 1792 Denmark became the first nation to abolish its slave trade, and over the next hundred years many other nations followed suit.

Nonetheless, this was not the end of slavery. The practice was used extensively during the history of the Soviet Union in the form of the gulags (forced labor camps) and regularly recurs in times of war. In the late twentieth century there emerged a new form of slavery known as debt bondage, in which employers charge workers so much for food and shelter that the exploited workers can never pay off their debts. GL

c. 10,000 BCE

Agriculture

Unknown

Cultivating naturally occurring crops or livestock for use as food or raw materials

A 7,000-year-old painting in the Spider Cave in Valencia, Spain, showing a figure gathering honey.

The first sustained agricultural efforts occurred around 10,000 BCE in the Fertile Crescent, an area of the modern-day Middle East that includes the Tigris and Euphrates rivers, the Levant, and the Nile River delta. Agriculture also arose independently in China in around 8000 BCE, and in the Americas before 3000 BCE. Early farmers learned to take wild plants, such as rye, chickpeas, and flax, and plant them for harvest, thereby reducing the need to travel to new locations to find food sources. The domestication of animals provided additional sources of food, products, and labor.

Several theories have been put forward as to why humankind made the switch to agriculture. One argument is that it was a means of coping with a crisis of overpopulation after the development of sedentism. Another theory posits that climate change at the end of an Ice Age led to the spread of forests, segmenting previously open ranges. This encouraged sedentism and territoriality, which led to the protection and propagation of local food resources.

“I had rather be on my farm than be emperor of the world.”

George Washington, U.S. president 1789–97

Agriculture forever changed the way in which humanity sourced food. By the Bronze Age (c. 3500–1000 BCE), Middle Eastern civilizations obtained the majority of their dietary needs from farmed food supplies. While this reliance upon agricultural production for food has produced sometimes disastrous famines and negative ecological consequences, it has also allowed the human population to expand tremendously. In the twentieth century, advances in farming techniques led to a massive increase in crop yields, spurring a population boom that is still ongoing. MT

c. 9000 BCE

Alemaking

Unknown

The art of brewing the world’s most ancient and popular alcoholic beverage

Many speculate that alemaking began in around 9000 BCE because this would correspond with the appearance of the first cereal farms. In the East, the ancient Chinese had a kind of rice beer called kui, which they began brewing around 7000 BCE; in the West, the ancient Mesopotamians were manufacturing beer at the Godin Tepe settlement at the beginning of the Bronze Age (c. 3500–1000 BCE). The oldest literary sources for alemaking are Mesopotamian—The Epic of Gilgamesh (c. 2000 BCE), The Hymn to Ninkasi (c. 1800 BCE), and The Code of Hammurabi (c. 1772 BCE)—all of which indicate that brewing and drinking ale were part of daily life.

Beer has a long history and has played a small but important part in human life. The ancient Egyptians spoke of the gods and the blessed enjoying “the beer of everlastingness,” whereas the Mesopotamians wrote hymns to Ninkasi, the goddess of alcohol, thanking her for the drink. Historians have even theorized that humankind’s fondness for beer and other alcoholic beverages was a factor behind the move to an agrarian society from a hunter-gatherer one.

“When you pour out the filtered beer … it is like the onrush of Tigris and Euphrates.”

The Hymn to Ninkasi (c. 1800 BCE)

There are roughly four steps to alemaking. First, there is the “mashing,” which involves mixing a starch source, such as malted barley, with hot water. Second, the resulting mixture, or “wort,” is collected in a kettle (traditionally copper), where some of the water is allowed to evaporate and hops are added. Third, the mixture is cooled and yeast is added. Finally, the liquid is stored in a cask or keg and allowed to sit before being imbibed. AB

c. 7500 BCE

Dyeing Cloth

Unknown

Changing the colors of fabrics and cloth by adding artificial dyes

Dyes have been a part of human history for millennia, and appear to have been invented independently in numerous different cultures. The earliest known use of dyes reaches back to about 30,000 BCE, when red ocher was used to decorate burial mounds in northern Italy. The oldest evidence for textile dyeing comes from the Neolithic settlement of Çatalhöyük (c. 7500–5700 BCE) in modern Turkey. By 3000 BCE, more advanced dyeing procedures existed in India, where dyes were fixed to fabric with the use of mordants (substances that chemically bind the dye to the fibers of the cloth). Early dyes came exclusively from natural sources, such as plants, roots, tree bark, and insects.

It is unclear whether the first dyed fabrics had a significance beyond being merely decorative, but over time they became a key indicator of the wearer’s wealth and social status. In around 1500 BCE the ancient Phoenicians developed a dye known as Tyrian purple, a color that for centuries was reserved exclusively for the garments of kings, emperors, and high priests. When Alexander the Great conquered Persia in 331 BCE, he found purple robes in the capital’s treasury that would have been worth millions of dollars today.

“The soul is dyed the color of its thoughts.”

Heraclitus, ancient Greek philosopher

Before the introduction of dyes, humanity was relegated to using the colors that existed in natural fibers. But many colors found in nature were not found in the products humanity could make, and the desire to control and wear these colors was strong. With dyes, an explosion of color entered the world, allowing textile producers to transform otherwise drab fabrics into vibrant, colorful cloths of beauty and value. MT

c. 7000 BCE

Simple Machines

Unknown

The development of basic devices that modify motion and force to perform work

The wheel and axle were used in Sumerian chariots, depicted here in the Standard of Ur (c. 2500 BCE).

The ancient Greeks were the first to define simple machines—devices that change the magnitude or direction of a force to make a task easier—but most of them had existed for millennia. Historically, there are six types of simple machine: the lever, wheel and axle, pulley, inclined plane, wedge, and screw.

“Give me a lever long enough … and I shall move the world.”

Archimedes, ancient Greek mathematician

The inclined plane was most probably the first, used by proto-humans to slide heavy weights (though there is no hard evidence to prove this). Next came the wedge, which—by creating a sideways force when pushed into something—enabled the user to split materials such as rocks and wood. The earliest examples of such tools, found in India and Mesopotamia, date to around 7000 BCE. The first documentation of the wheel and axle comes from the Sumerian city-state of Ur, from around 3500 BCE. Its earliest use was probably in raising water from wells, but it also led to the development of horse-drawn chariots. The pulley, by necessity, followed the wheel—being a wheel with a rope wrapped around it—and it was used to raise and lower objects. It is likely that pulleys were used in the building of the Great Pyramid of Giza in c. 2560 BCE. Levers are also believed to have been used in ancient Egypt, but it was Archimedes (c. 287–212 BCE) who first described the principle of using one. The screw—which allows a rotary motion to be converted into a forward or backward one—appeared in ancient Greece. Sometimes credited to Archytas (c. 428–c. 350 BCE), it was used in wine and olive presses. These simple machines enabled us to push beyond our natural capabilities and laid the foundations for innumerable technological advances. GL
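Archimedes’ boast rests on what is now called the law of the lever, a standard result worth spelling out (the formula is added here for illustration; it does not appear in the original entry). A force F1 applied at distance d1 from the fulcrum balances a force F2 at distance d2 when:

```latex
F_1 d_1 = F_2 d_2 \quad\Longrightarrow\quad F_2 = F_1 \,\frac{d_1}{d_2}
```

So a 100-pound load placed 1 foot from the fulcrum can be held by just 20 pounds of effort applied 5 feet away on the other side; lengthening the effort arm reduces the required force in direct proportion, which is the sense in which a sufficiently long lever really could “move the world.”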

c. 6000 BCE

Human Sacrifice

Mesopotamia

An ancient exchange between a civilization and its god(s)

A sixteenth-century Aztec painting of a human’s heart being offered to the war god Huitzilopochtli.

Human sacrifice involves the killing of one or more humans in order to serve a particular religious or ritualistic purpose. Some of the earliest evidence for the practice of human sacrifice dates from c. 6000 BCE, among the Mesopotamian peoples of the ancient Near East. The Mesopotamians were among the first cultures to develop a practice of retainer sacrifice, in which the slaves and servants of royalty and nobility were killed at the time of their master’s death. It was believed that, in being sacrificed, they would continue to serve their master as courtiers, guards, and handmaidens in the afterlife. A similar practice is also known to have existed at this time in ancient Egypt.

Perhaps more commonly, human sacrifice has been employed throughout history as a means of exchange or trade between a community and its god or gods. In this context, human life is offered as a form of appeasement, typically in exchange for the protection of lands or good fortune in war. Greek legend, for example, tells of Agamemnon’s intention to sacrifice his daughter in exchange for success in the Trojan War (c. 1200 BCE). In addition, human sacrifice has been used on a large scale at the completion of religious buildings such as the Great Pyramid of Tenochtitlan in Mexico, for which the Aztecs are documented to have sacrificed between 10,000 and 80,000 prisoners in 1487.

Human sacrifice gradually became less prevalent over time and is extremely uncommon as a practice in the modern world—it is defined in legal terms as “ritual murder.” However, the ongoing study of human sacrificial practices and increasing amounts of archaeological information uncovered continue to reveal important insights into the behaviors and customs of our ancestors. In this way, the idea of human sacrifice continues to inform and influence our present-day religious and moral codes. LWa

c. 6000 BCE

Winemaking

Ancient Georgia

Creating a fermented drink from cultivated grapes

An Egyptian wall painting (c. 1400–1390 BCE) demonstrating the method for pressing grapes for wine.

Unlike other types of plants that can be used to create alcoholic beverages, grapes do not need additional nutrients to begin fermenting. When yeast is added to grape juice, or is naturally present in the environment, it will transform the naturally occurring sugars into alcohol, thus creating wine.

Although wild grapes and fruits had long been available to early hunter-gatherers, the first evidence of winemaking comes from Neolithic sites in Georgia dated to about 6000 BCE. Ancient people there first cultivated the naturally growing Vitis vinifera grape and fermented the fruit into wine, adding preservatives to allow them to store the drink for longer. Written cuneiform records show that ancient Sumerians in the city-state of Ur had fairly advanced winemaking practices by 2750 BCE, and by 2500 BCE winemaking had spread to western Mediterranean cultures. Wine was largely confined to the upper echelons of ancient societies, but beginning in c. 500 BCE in classical Greece, wine’s popularity exploded and it subsequently became widely available to the masses in addition to the aristocracy. Winemaking technology advanced greatly during the time of the Roman Empire (27 BCE–476), with improvements to the design of the wine press and the development of barrels for storing and shipping.

Ever since its earliest days, wine has proven to be much more than merely a fermented drink. People have used it in religious ceremonies, heralded it for its medicinal qualities, and traded it as a valuable commodity. Before effective water treatments became available, wine gave people a way of drinking water relatively safely, by using diluted wine to kill naturally occurring pathogens. As a popular drink, social lubricant, and object of admiration and value, wine continues to hold a strong place in the modern world, with worldwide production exceeding 6 billion gallons (23 billion liters) in 2010. MT

c. 5000 BCE

Ghosts

Mesopotamia

The existence of disembodied human souls or spirits

The “Great Ghost and Attendants” rock painting (c. 3000–2000 BCE) at Canyonlands National Park, Utah.

Since the early days of civilization in ancient Mesopotamia, around 5000 BCE, people have believed in ghosts. While most religious traditions are unclear on how the human soul or spirit relates to the human body, the distinction between the two—with the human soul being in some sense the “true person”—has rarely been questioned. Although “ghost” and “human soul” are often thought to be synonymous, they are not quite the same. While “human soul” is a positive term, “ghost” is, generally speaking, a negative one, indicating the unnatural state of the disembodied human soul.

From Mesopotamia to Japan, there has been a strong sense that the human soul is not supposed to be disembodied, though it can be, and the prospect has inspired a kind of universal horror. Gods and angels may function normally without bodies—or at least bodies in our sense—but humans do not. Homer’s Odyssey (c. 850 BCE) offers a typical example of this when it shows the ghosts in the underworld hungry for blood, since blood and flesh are what make a human. Without blood or—better—bodies, ghosts are foggy-minded and incompetent.

“Now the souls of the dead who had gone below came swarming …”

Homer, Odyssey (c. 850 BCE)

Nevertheless, in shamanistic traditions, ghosts have often been consulted for knowledge or advice through the practice of necromancy. Abrahamic religions consider this an evil practice, presumably because the disembodied soul or ghost, dwelling in the underworld, could only be contacted via spirits of the underworld—demons—and so the whole enterprise would be tied up with demonic or evil activity. For these religions, the ghost condition is meant to be healed. AB

c. 5000 BCE

Creation Myth

Unknown

A story that explains how humanity, life, and the universe came into existence

Creation myths, offering a traditional explanation for how the universe originated, exist across cultural boundaries and come in myriad forms, detailing the creation of everything from humans to animals to natural phenomena. They offer a range of explanations: stories of creation by a powerful maker (such as the Abrahamic religions), of the world being created from a primordial mother and father (such as the Babylonian myth “Enuma Elish”), origin from an ancient cosmic egg (found in the Hindu tradition), creation by divers who pull the earth out of a body of water (common in Native American folklore), and many others.

As humans evolved into modern Homo sapiens and their capacity to think and reason expanded, their thoughts turned to questions about their origins and how the world came to be. It is impossible to know when the first such tales were created, as they originate from prehistoric times and existed only as oral stories. However, the first known myths come from ancient Mesopotamia, dating back to around 5000 BCE. Many of the myths share common themes, images, or motifs, such as the existence of divine beings or spirits. They also reflect humanity’s understanding of nature, relating common experiences of birth, destruction, and death to the question of how it all came to be.

“Creation myths are the deepest and most important of all myths.”

Marie-Louise Von Franz, Creation Myths (1972)

Before scientific inquiry allowed humanity to shed light on the natural world, creation myths provided both answers and cultural foundations. Societies and religions identified themselves by such stories, and these cultural bonds and shared religious identities remain today as both unifying and destructive forces. MT

c. 5000 BCE

Miracles

Mesopotamia

Extraordinary occurrences that are attributed to a divine power

Humankind has believed in miracles since the rise of organized religion from c. 5000 BCE. Some religious traditions, such as shamanism, do not make a strong distinction between the supernatural and natural, and so the thought of the supernatural doing something irregular is unimportant. Similarly, in a pantheistic religion such as Hinduism, the natural world is viewed as illusory, making the concept of miracles redundant. However, in theistic or polytheistic religions, in which a god or gods are thought to exist, miracles are to be expected because humans will not be able to understand all of the deity’s actions.

Materialists and naturalists reject the concept of miracles, however, and embrace “methodological naturalism,” which demands that no supernatural agents be considered when investigating a particular unexplained phenomenon. Theists typically respond to this by asking whether we have good reason to believe that God and other supernatural creatures exist. If we do, then the universe is not a closed system and we are rationally justified in believing that miracles can occur—though this, by itself, will not help us to know whether a particular claim to the miraculous is true.

“And seeing signs and miracles, he was greatly amazed …”

The Bible, Acts 8:13

Today, the term “miracle” is often used in the sense of describing something that has happened against all odds, rather than carrying specific religious connotations. Nevertheless, for many modern theists, miracles not only highlight God’s concern for humans, but also remind them of an unseen world of angelic and demonic causation that shapes human history. AB

c. 5000 BCE

Evil

Mesopotamia

The concept of an act carried out with deliberately harmful intent

A relief showing Assyrian warriors impaling prisoners during the siege of Lachish in 701 BCE.

Ever since people were able to think and understand what was going on in the world, humanity has understood that pain, suffering, and destruction exist. There is no single definition of evil used around the world, though all languages have words to describe what is wanted, moral, or “good,” as well as that which is unwanted, immoral, or “bad.” In the earliest recorded religion, that of the ancient Mesopotamians, which originated in around 5000 BCE, the concept of evil was well known. To the Mesopotamians, evil demons caused strife and suffering, and religious rituals and exorcisms were available to protect people against evil. The idea of being punished for evil deeds in the afterlife originated with the Zoroastrian religion in the second millennium BCE, in which sinners were believed to be sent for eternity to the House of Evil and the just were believed to be sent to the House of Song.

“There has to be evil so that good can prove its purity above it.”

Siddhartha Gautama (Buddha)

“Evil” is a word that typically applies when a person acts with malice. Evil acts are immoral acts, the worst actions in which a person can engage. The idea of evil is, in many cultures, vital to the understanding of right and wrong. It serves as a basis for moral judgments and religious doctrines, and is instrumental in the creation and administration of laws and criminal justice. In some cultures and religious traditions, evil is personified in the form of a spirit or malevolent force, while in others it exists as suffering, or as the result of humanity ignoring divine guidance. Across cultures and perspectives, reflections on the concept of immorality, wickedness, and evil shape how people view the world in terms of morality, the divine, and human nature. MT

c. 5000 BCE

Leisure

Ancient Egypt

Rest and relaxation as an integral feature of the social hierarchy

Ancient Egyptian nobles listen to a harpist in this mural found in the Tomb of Anherkha (c. 1187–1064 BCE).

The idea of leisure is inextricably linked to the emergence of a distinction between work and play. Some of the earliest indications of the existence of leisure can be identified in prehistoric human societies. The shift away from a nomadic, hunter-gatherer existence for prehistoric peoples enabled the development of a more stationary lifestyle based on the growing of crops and rearing of animals. As such, early societies developed practices of land ownership, which ultimately led to the emergence of a social hierarchy. It was within the newly established social elite that the idea of leisure first emerged.

More concrete evidence for the existence of leisure, however, can be found in ancient civilizations such as that of ancient Egypt. Leisure played an important role in ancient Egyptian society, in which activities such as music, theater, and dance were performed for both religious and entertainment purposes. This blurring of the boundary between leisure and religious, ritualistic, or even political activities is characteristic of the historical development of leisure. A similar role for leisure activities is also seen in later civilizations such as Babylonia and ancient Greece. Sports such as boxing, wrestling, and archery all featured prominently as part of a program of recreation and relaxation.

By the turn of the twentieth century, the idea of leisure had become even more deeply instilled in modern Western society, with a burgeoning economy allowing more time for leisure and the state dedicating funds specifically for the development of public parks and recreation areas. Similarly, an important role is now assigned to the idea of “free” evenings and weekends. However, a significant discrepancy between access to leisure time and activities in the developing and developed worlds provides a key indicator of the inherent inequalities still associated with the idea of leisure today. LWa

c. 5000 BCE

Arranged Marriage

China

Marriage as a means of preserving cultural and religious harmony or homogeny

Detail from an eighteenth-century illustration depicting a Hindu marriage ceremony.

The idea of arranging a marital union between two people originated in Far Eastern cultures more than 7,000 years ago as a means of ensuring cultural and religious harmony within families and communities. Typically, the arrangement would be made by parents or members of the community, such as religious leaders, who would negotiate a pairing along with any exchange of money or property to be made. Arranged marriage also emerged in the ancient world on the Indian subcontinent and in the Near East, and at a similar time in African and South American cultures, demonstrating the pervasiveness of ideas such as homogeny and cultural conservatism throughout human history.

Although significantly more common in Eastern cultures, arranged marriages were also popular in Europe during the sixteenth and seventeenth centuries. Families in the upper classes and nobility would often arrange a union between children deemed to be well suited in terms of both social and financial standing. Similarly, arranged marriages within royal circles have been the norm for centuries, often motivated by the political need to extend or consolidate power. This tradition continues in a contemporary context: the union between Charles, Prince of Wales, and Lady Diana Spencer in the United Kingdom in 1981 reportedly did not originate in unalloyed romantic attachment.

Arranged marriages are still common in countries such as India, Pakistan, and Afghanistan, and global migration has led to the practice’s more recent emergence in places such as the United States. Technological advances have led to the creation of matchmaking websites for arranging marriages, and there are now conferences where potential spouses and their families can network. Despite its ancient origins, arranged marriage continues to be relevant in a contemporary world. LWa

c. 5000 BCE

Belly Dancing

Unknown

A dancing style characterized by movements of the hips and abdomen

A detail from a wall painting found in the tomb of the ancient Egyptian nobleman Nebamun, dating to c. 1350 BCE, showing dancers entertaining guests at a banquet.

The term “belly dance” describes a variety of folk dances that originated in Asia, Egypt, the Middle East, or India. The dancing features complex torso movements, undulations, gyrating hips, and coordinated movements of the chest, arms, and legs. Many, if not most, forms of belly dancing primarily feature a female dancer, although males are also known to perform.

“It is the most eloquent of female dances, with its haunting lyricism, its fire, its endlessly shifting kaleidoscope of sensual movement.”

Wendy Buonaventura, Serpent of the Nile: Women and Dance in the Arab World (1989)

Belly dancing is a folk dance that has been passed down from generation to generation, so it is difficult to determine its exact historical origins. It is possible that this type of dancing originated in prehistoric times in cultures that worshipped mother goddesses and performed ritualistic dancing as part of their religious ceremonies. There is strong evidence to suggest that belly dancing existed in ancient Egypt, with painted images of belly dancers found in tombs dating to c. 5000 BCE.

In Western cultures, the term “belly dancing” is largely credited as originating at the World’s Fair in Chicago, Illinois, in 1893. Sol Bloom, the fair’s entertainment director, coined the term danse du ventre, French for “belly dance,” to describe the folk dances performed by dancers from Egypt and the Middle East. Prior to the popularization of the term, Western audiences were first exposed to “oriental” folk dances during the Victorian period, as a result of the rise of interest in Ottoman or “oriental” cultures.

To many modern audiences, belly dancing is a mostly erotic exercise in which the female dancer’s movements are designed to entice and excite. Belly dancing, unlike many other types of folk dance, is a solo, non-choreographed, improvisational act that draws attention to a woman’s abdomen. The movements of the dancers simultaneously venerate fertility and sexuality, which perhaps explains its nearly universal appeal. MT

c. 5000 BCE

Gymnastics

Ancient Egypt

The practice of exercising the body to develop physical agility and coordination

A Greek sculpture of a gymnast that dates back to the fourth century BCE. Gymnastics was a central component of ancient Greek education and was mandatory for all students.

The term “gymnastics” derives from the Greek gymnos, meaning “naked,” a reference to the fact that in ancient Greece most male athletes competed in the nude. This linguistic connection, and the fact that gymnastics is one of the oldest Olympic events, has led to the common belief that the sport originated in ancient Greece. However, there is plenty of evidence to show that the practice long predates the Greeks: Egyptian wall paintings dating from roughly 5000 BCE show acrobats performing gymnastics as entertainment for the nobility, and a wall painting of c. 2000 BCE depicts a young woman bending backward on all fours in a demonstration of flexibility; Minoan frescos in Crete, dating from around 2700 BCE, show acrobats vaulting off the backs of bulls (likely as part of a religious ceremony); while in China mass gymnastic-like exercises were practiced as part of the art of Wushu (a form of martial art) some 2,000 years before the Greek Olympics, which began in 776 BCE at Olympia in southern Greece.

“Everything is about your movements and precision and timing, which is what gymnastics is about.”

Shawn Johnson, U.S. gymnast

Modern gymnastics is credited to a German, Johann Friedrich Guts Muths (1759–1839). He developed a complete program of exercises intended to improve balance, flexibility, and muscular strength, based on his experience working at a progressive school. Gymnastics spread through Europe in the nineteenth century, primarily for military physical training.

Today, the sport encompasses artistic gymnastics (which requires special equipment such as a balance beam), rhythmic gymnastics (a style that combines elements of ballet and dance with the use of equipment such as a ribbon), trampolining, aerobics, and acrobatics. Gymnastics is an art of body contortion as much as a feat of strength and grace. The greater flexibility that can be developed through training, in addition to the power gained, reveals the true physical potential of the human body. KBJ

c. 4500 BCE

Megalithic Monuments

Unknown

Huge Neolithic and Bronze Age structures of undressed stone

The exact chronology of the spread of enormous stone monuments is unknown, but it is generally agreed that the earliest such structures, dating from around 4500 BCE, were the dolmens of the Mediterranean coast. A dolmen consists of several upright supports beneath a flat stone roof; the whole edifice—originally a burial site—was then covered with earth that has often since been eroded to reveal a stark and imposing megalith.

Later came menhirs, single upright stones, often set in groups arranged in circles, semicircles, ellipses, or alignments of several parallel rows. Menhirs occur most commonly in Brittany, France, and their name is derived from the Breton men (stone) and hir (long). The best-known standing-stone monuments, however, are probably those at Avebury and Stonehenge in England.

“Monuments … are effective and enduring means of communication.”

E. DeMarrais, J. M. Castillo, and T. Earle, anthropologists

How these vast stones were extracted from the earth, transported, and erected is one of the great unsolved mysteries of the ancient world: the largest sections of Stonehenge are 32 feet (10 m) high, weigh 45 tons, and were quarried 200 miles (320 km) from the site. Their purpose also remains obscure, although the similarity of the symbols found carved on many of the monuments suggests that they were used for religious ceremonies. They may equally have been expressions of triumphalism, however.

At the start of the Bronze Age in northern Europe (c. 2000 BCE), the emergent Beaker folk continued the megalith tradition, albeit on a reduced scale, by constructing round barrows (large stone mounds) for single burials. Meanwhile, monuments were also erected in Africa, Asia, the Americas, and Melanesia. GL

c. 4000 BCE

Heaven and Hell

Unknown

The dwellings of, respectively, the just and blessed, and the unjust and cursed

Rebel angels are cast into Hell in an illustration of Milton’s Paradise Lost by William Blake (1808).

The idea of an afterlife that consists of a rewarding dwelling in the heavens for the righteous and a cursed dwelling in the underworld for the unrighteous is an extremely ancient and global one. There is evidence that the Mesopotamians (whose culture originated in c. 4000 BCE) believed that most of their gods dwelt “above,” while the souls of the dead went down to the underworld—a place of intense heat, darkness, and sorrow. Similarly, the ancient Egyptians believed that the dead descended to the underworld, where they were judged for their actions; if they were deemed just, then they could climb the ladder to the heavens, and if unjust, they were devoured by the crocodile-monster Ammit.

“Then I saw a New Heaven and a New Earth … coming down from God …”

The Bible, Revelation 20:14; 21:1, 2

There is remarkable harmony of thought between the ancient polytheistic and the monotheistic Abrahamic religions in this view of the afterlife as split between “Heaven” and “Hell.” However, these terms are often misunderstood, especially in Christianity. According to the Bible, God transcends all locations, and so while God is often spoken of as dwelling in Heaven, Heaven itself is a created, temporary place where the angels and righteous humans dwell. This contrasts with “Hell,” a created, temporary place where damned angels and unrighteous humans go. Ultimately, however, the book of Revelation talks about God destroying Hell in a “Lake of Fire,” and then “creating a New Heaven and a New Earth”—the New Heaven for the angels, and the New Earth for humans. As the two destinations for the soul when the body dies, Heaven and Hell continue to motivate pan-religious beliefs, actions, and art. AB

c. 4000 BCE

Calendar

Ancient Egypt

Ordering the years, months, and days according to the sun, moon, and seasons

The Aztec calendar stone (c. 1479) reflects the Aztec view of time and space as wheels within wheels.

The term “calendar” derives from the Latin calendarium or calendra, meaning “account book,” and kalendae, referring to the new moon and first day of the Roman month. A calendar is a system of ordering “time” in what is ordinarily an annual cycle, divided and subdivided according to the annual revolution of the Earth around the sun, the seasons that this causes, and the positions of the moon. The most common calendrical period (beyond the distinction between day and night) is the lunar month (the full cycle of the phases of the moon), which lasts about 29.5 days. All the major early centers of ancient civilization kept calendars, including Mesopotamia, the Indus and Nile valleys, eastern China, Mesoamerica, and the Andes.
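The central difficulty every calendar maker faces follows from simple arithmetic on the two cycles just described (the figures below are standard astronomical values, not part of the original entry): twelve lunar months fall roughly eleven days short of one solar year.

```latex
12 \times 29.53\ \text{days} \approx 354.4\ \text{days},
\qquad
365.24\ \text{days} - 354.4\ \text{days} \approx 10.9\ \text{days}
```

The shortfall accumulates to about a month every three years, which is why lunisolar calendars periodically insert a thirteenth “leap month,” while purely solar calendars such as the Julian and Gregorian abandon the moon altogether and adjust with leap days instead.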

Among the earliest calendars are the Egyptian calendar (traceable as far back as 4000 BCE), the Chinese calendar (mythically said to have been created in 2637 BCE, with historical evidence stemming as far back as the fourteenth century BCE), the Mesoamerican calendar (stemming back perhaps as far as 1000–900 BCE), the Indian calendar (the basic principles of the Vedic calendar can be traced to within the first millennium BCE), and the Japanese calendar (dating from around 660 BCE). The most commonly used calendar throughout the world (part of the legacy of European colonialism) is the Gregorian calendar, which was developed from the first-century BCE Julian calendar. The Gregorian calendar takes its name from Pope Gregory XIII, who mandated reforms to the Julian calendar (itself a reform of the Roman calendar), and it has been adopted by many countries since the sixteenth century.

The calendar, as a concept, has been so essential to the organization of civilization, religion, agriculture, politics, social affairs, and other aspects of human society that the story of the calendar is almost as old as the story of civilization itself. JE

c. 4000 BCE

Flat Earth Myth

Ancient Egypt

The ancient but erroneous belief that the Earth is flat

A world map of the flat Earth, printed by Beatus Rhenanus in the early sixteenth century.

The ancient Egyptians and Mesopotamians were probably the first to believe, in c. 4000 BCE, that the world was a flat circular surface, though this notion was fairly ubiquitous in antiquity—understandably so, given that the science needed to measure the Earth had not yet been discovered. In the West, the best-known references to a flat, circular Earth come from either the Greek poet Homer (fl. c. 850 BCE), who speaks of “the circular surface of the Earth,” or the biblical prophet Isaiah, who speaks of God sitting “enthroned above the circle of the Earth.” In the East, Hindu, Jainist, Buddhist, and Shinto scholars believed likewise, usually speaking of the Earth as floating on water, with the heavens or sky umbrella-like overhead.

The idea of a flat, circular Earth was common in the ancient world, but once the Greeks—arguably starting with Pythagoras (c. 570–c. 495 BCE), though certainly with Ptolemy (c. 100–c. 170 CE) a few centuries later—devised the technology necessary to measure latitude, longitude, and climes, the flat Earth thesis began to diminish (at least in the West). For example, the Christian philosopher Boethius (c. 480–524/25), whose Consolation of Philosophy (c. 524) was arguably the most important book (after the Bible) in the Christian West, argued that the Earth was spherical. Why, then, is it sometimes thought that the Christian West, even at the time of Columbus, thought the world flat?

Although the Renaissance and Enlightenment were largely Christian philosophical-scientific projects, some Renaissance and Enlightenment thinkers were hostile to the Bible and were eager to dismiss it as myth since, among other things, its poetic utterances about a disk-shaped Earth were scientifically flawed. This kind of enlightened propaganda created in the popular imagination the false, but influential, notion that most people in the Christian West thought the world flat. Today, the Flat Earth Society exists for anyone interested in the theory. AB

c. 4000 BCE

Feng Shui

China

Arranging one’s surroundings to encourage harmony and enhance quality of life

A bronze feng shui compass from China’s Spring and Autumn Period (770–476 BCE). The feng shui compass, or Lo-Pan, is used to define the bagua (energy map) of any given space.

Feng shui is the Chinese art of arranging and moving the external built environment—anything from buildings and gardens to the furnishings and objects within them—to maximize that environment’s harmony and balance, and optimize the flow of energy (known as qi or chi) through and around it. Feng shui was derived from the Chinese concepts of yin (the passive, negative principle) and yang (the active, positive principle), and evidence suggests that it may have been practiced in some form for thousands of years. The earliest believed examples of the practice date back to c. 4000 BCE, and include dwellings and graves that have been aligned according to certain astronomic principles.

“Both heaven and earth influence all living beings … it is in your hands to turn this influence to the best account for your advantage.”

Ernest Eitel, Feng Shui (1878)

One early approach to locating qi was based on the scheme Wu Xing, the five phases, which conceptualizes qi as alternating between yin and yang as it progresses through the Earth’s five elemental phases—metal, earth, fire, wood, and water. In the Wu Xing system, every calendar year and every direction of the compass is allocated its own elemental designation. Therefore, each individual may harmonize their personal qi by orienting their home, workplace, and even their grave to directions compatible with their year of birth. Numerous additional rules govern the successful arrangement of patterns of objects, but as a general philosophy of design, orientations appropriate to feng shui are said to promote welfare, increase creativity, facilitate better interpersonal relationships, aid contemplation, and reduce stress.
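As a rough illustration of the claim that every calendar year receives an elemental designation, the sketch below encodes one common convention, the “heavenly stem” assignment by the final digit of the Gregorian year. This is a minimal Python sketch of that convention only; the function name is invented for illustration, and real feng shui practice weighs far more factors (direction, birth chart, site) than the year alone.

```python
# One common Wu Xing convention: the last digit of the Gregorian year
# determines the year's elemental phase.
#   0-1 metal, 2-3 water, 4-5 wood, 6-7 fire, 8-9 earth
ELEMENTS = ["metal", "metal", "water", "water", "wood",
            "wood", "fire", "fire", "earth", "earth"]

def year_element(year: int) -> str:
    """Return the elemental phase conventionally assigned to a year."""
    return ELEMENTS[year % 10]

print(year_element(2024))  # "wood" -- 2024 is the year of the wood dragon
```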

The principles of feng shui were largely unknown in Europe and the United States until the late nineteenth century. The German Protestant missionary Ernest Eitel (1838–1908) published a study of feng shui in 1878, titled Feng-shui: The rudiments of natural science in China, but it was not until perhaps the last quarter of the twentieth century that the wider public became familiar with the ancient Chinese system. JE

c. 3500 BCE

Wind Power

Ancient Egypt

Converting the energy of the wind into a useful source of power

A tenth- or eleventh-century BCE papyrus with an illustration from the ancient Egyptian Book of the Dead, showing a deceased person sailing through the underworld.

The concept of harnessing the wind to provide power was first put into practice in ancient Egypt in c. 3500 BCE, with the introduction of sails to propel boats. Previously, boats were powered by groups of rowers, and thus were limited in how far and how fast they could travel. The development of square sails fashioned from papyrus meant that boats could travel farther and faster, with a smaller crew. This had a significant impact on trade, as it not only sped up the process of transporting goods but also enabled boats to become bigger—by c. 1200 BCE, the Phoenicians were using 80-feet-long (24 m) wooden cargo vessels with large cloth sails.

“The air when set in motion becomes wind (for wind is nothing else but air in motion) …”

Heron of Alexandria, Pneumatica (first century CE)

The first example of the wind being used to drive a mechanical device is generally considered to be the windwheel of the Greek mathematician and engineer Heron, or Hero, of Alexandria (c. 10–70 CE). In his Pneumatica, Heron described his innovations to the hydraulis, a water organ originally designed by Ktesibios of Alexandria (fl. c. 270 BCE). The original mechanism forced air through pipes to sound the notes selected on a keyboard, with pressure in an air chamber kept constant by using water to weigh down on it. Heron improved the valve that released the air into the sounding pipe, and replaced the water with a wind turbine that could be moved to catch the prevailing wind in order to maintain the necessary air pressure.

The origins of perhaps the best-known wind-powered mechanical device, the windmill, are much debated, but there is reliable evidence that it was in widespread use in Persia by the seventh century. Windmills became a common tool for pumping water and grinding corn and grain across Europe and Asia, and remained so until the nineteenth century. In the twentieth century wind power became widespread as a means of generating electricity, and today wind turbines are one of several renewable energy sources that offer an alternative to fossil fuels. DM
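The resource being harnessed, from papyrus sails to modern turbines, can be quantified with a standard result from physics (added here for illustration; it is not part of the original entry): the kinetic power carried by wind of speed v through a swept area A is

```latex
P = \tfrac{1}{2}\,\rho A v^{3}
```

where ρ is the density of air, about 1.2 kg/m³ at sea level. The cubic dependence on speed is why siting matters so much for wind turbines: doubling the wind speed makes eight times the power available, though by the Betz limit no turbine can extract more than about 59 percent of it.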

c. 3500 BCE

Sundial

Ancient Egypt

Using the shadow cast by the sun to reveal the time of day

Sundials are the oldest known instruments for telling the time of day. The earliest known sundials were ancient Egyptian obelisks created around 3500 BCE, although the oldest surviving example, also Egyptian, was made relatively recently, in c. 800 BCE.

The key component of any sundial is the gnomon, a vertical stick or pillar that casts a shadow that moves as the sun crosses the sky from east to west. Beneath the gnomon is a flat surface, the dial plate, usually marked with numbered lines showing the hours of daylight. It is the shadow of the gnomon on the dial plate that indicates the time of day.

As sundials were adopted more widely across the ancient world, they became increasingly sophisticated: their dial plates were inscribed with different sets of numbers to reflect the varying lengths of the day in each season. This system assigned twelve hours to every day, so each hour in high summer could be three or four times longer than each hour in midwinter. The earliest example of this type of sundial is attributed to the Greek astronomer Aristarchus of Samos (c. 310–230 BCE).
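Modern dials, by contrast, mark hours of equal length, and their geometry reduces to a single standard formula (given here for illustration; it is not part of the original entry). On a horizontal dial at latitude φ, the hour line for h hours before or after noon makes an angle θ with the noon line, where:

```latex
\tan\theta = \sin\varphi \,\times\, \tan(15^{\circ} \times h)
```

At the latitude of London (about 51.5°), the 3 p.m. line (h = 3) sits at arctan(sin 51.5° × tan 45°) ≈ 38° from the noon line rather than 45°, because the gnomon’s shadow sweeps unevenly across a flat plate over the course of the day.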

“Sundials tell ‘sun time.’ Clocks and watches tell ‘clock time.’”

Sundials.co.uk

In the Middle Ages, sundials were taken up with great enthusiasm by Muslims, who marked their dial plates with the times for daily prayers, sometimes to the exclusion of the hours of the day. But after the emergence of clockwork in the fourteenth century, sundials gradually fell from favor. They retained their value, however, being used until the nineteenth century to reset mechanical timepieces. Since 1955, atomic clocks have given the official world time. GL

c. 3200 BCE

Pictograms and Alphabet

Sumeria

The building blocks for the evolution of written communication

A tablet with cuneiform writing from the ancient Sumerian city of Uruk, dating back to c. 3200 BCE.

Pictograms—pictorial symbols designed to express meaning—represent the earliest known form of written communication. One of the first pictographic systems, known as cuneiform, emerged in around 3200 BCE among the Sumerian people of the ancient Near East. This intricate script, consisting originally of more than 1,000 symbols, endured for more than 3,000 years and influenced the development of many subsequent pictographic systems in the ancient world.

Despite its importance, however, cuneiform was eventually supplanted by the more efficient Phoenician alphabet (c. 1100 BCE), which consisted of only two dozen distinct characters based on the basic consonant sounds. Transported throughout the Mediterranean by Phoenician merchants, the alphabet was adopted in ancient Greece, where it was modified to incorporate vowel sounds—thereby creating what is generally considered the world’s first complete alphabet.

“Until writing was invented, man lived in acoustic space: boundless, directionless …”

Marshall McLuhan, The Medium Is the Massage (1967)

The ancient Greek alphabet (c. 1000 BCE) is also the earliest ancestor of the Latin alphabet (c. 600 BCE), which is the most common alphabet in use today. In line with the Greek system, the Latin alphabet is based on a distinct set of basic consonant and vowel sounds (known as phonemes). Significantly, this enables the transliteration of words from one language into another—an innovation that continues to have an important impact on modern communication. However, despite the advantages of alphabetic systems, pictograms are still widely used in the modern world—something you are sure to notice next time you are driving on unfamiliar roads or visiting a public bathroom. LWa

c. 3000 BCE

Cremation

Unknown

The practice of disposing of dead bodies by burning them

The funeral pyre of Indian political and spiritual leader Mahatma Gandhi in Delhi, India (1948).

The practice of cremation—the incineration of a dead body—began in c. 3000 BCE, most likely in Europe and the Middle East. Cremation is well known to be a key feature of cultures in India, but its introduction on that subcontinent was relatively recent, dating from 1900 BCE.

From c. 1000 BCE the ancient Greeks burned the bodies of soldiers killed in action on foreign soil so that their ashes could be repatriated to their native land. Thus associated with heroes, cremation became regarded as the most fitting conclusion to a life well lived. It remained a status symbol in ancient Rome until the rise of Christianity from the first century CE, which taught that the dead would rise at the end of the world. This persuaded converts to bury their dead, so their bodies would still exist on the day of judgment.

“ … he is cremated … The swan of the soul takes flight, and asks which way to go.”

Sri Guru Granth Sahib, Sikh scripture

Cremation thereafter became unfashionable and in some countries forbidden. One of the principal nonreligious objections was that it might conceal foul play. Attitudes changed in the late nineteenth century, partly because of the publication in 1874 of Cremation: The Treatment of the Body After Death, a book by Queen Victoria’s surgeon, Sir Henry Thompson. In Japan, cremation was legalized in 1875; the first U.S. crematorium was opened in 1876; and in 1884, English courts ruled that it was permissible to dispose of human corpses in this way.

Cremation is now firmly reestablished in most countries: in Japan it is almost universal; in Britain and Germany more than 50 percent of dead bodies are cremated. Only the United States bucks the trend: more than 90 percent of Americans are still interred. GL

c. 3000 BCE

Judgment Day

Ancient Egypt

A day of reckoning on which people are divinely judged on their morality

Judgment day is commonly understood to refer to a day in the future when individuals will be judged on the basis of the morality of their actions by a divine authority. The idea of a day of judgment can be traced back to the ancient Egyptians in the third millennium BCE. The Egyptians believed that upon death a person’s soul would enter the underworld and arrive at their personal judgment day in the Hall of Two Truths. From there, a good person’s soul would proceed to a blissful afterlife, while an evil soul would be sent to the Devourer of the Dead.

The idea of a day of judgment also emerged in later religions, and is still in place in modern belief systems. However, in contrast to the Egyptian story, judgment day is now taken to refer to a specific day on which the whole of humanity will be judged. The earliest eschatology of this kind is found in the Zoroastrian religion, which emerged around 1500 BCE. Here, judgment day serves as the precursor to a perfect state of the world in which all evil has been eradicated.

“The Day of Judgment is an important notion: but that Day is always with us.”

Alfred North Whitehead, mathematician

This notion of a heavenlike world emerging after the final judgment of all peoples is a familiar feature of the major Abrahamic religions. Judaism posits the “End of Days,” followed by a Messianic Age of Peace, and much of the Islamic Koran is concerned with the Qiyamah (Last Judgment) and subsequent treatment of the righteous and unrighteous in heaven or hell. The idea of judgment day has also attracted attention from secular society, showing that the concept of a final judgment penetrates not only across different faiths and times, but also deep into the human psyche. LWa

c. 3000 BCE

Perfume

Indus Valley Civilization

The use of pleasant-smelling substances on the body

An Assyrian perfume burner from the thirteenth century BCE. A flame placed inside the burner would heat incense in the well at the top, creating a fragrant smoke.

Human beings produce natural odors, some pleasant, some less so. Nature is full of attractive odors—produced by a wide range of herbs and flowers, but also by specific animals—and it did not take people long to commandeer naturally fragrant materials in order to make themselves smell more pleasing. Modern perfumes are created from combinations of various synthetic and natural compounds, oils, and additives, but before these were available, perfume makers collected, combined, and refined natural compounds to create scents for the use of both men and women.

“ … Take unto thee sweet spices, stacte, and onycha, and galbanum; these sweet spices with pure frankincense: of each shall there be a like weight: And thou shalt make it a perfume, a confection after the art of the apothecary, tempered together, pure and holy …”

The Bible, Exodus 30:34–36

The earliest known evidence of the use of perfume is from the Indus Valley Civilization, where people stored scents in terracotta pots as early as c. 3000 BCE. Mesopotamian culture had an advanced perfume-making industry by around 2000 BCE, and from c. 700 BCE the ancient Greeks made perfumes not only to make the wearer smell pleasing but also to treat specific illnesses, to attract a sexual partner, and even to clarify addled thoughts. In the first century CE the city of Rome alone imported 2,800 tons of frankincense and 550 tons of myrrh each year to meet the people’s desire for perfume, and both fragrances became important in religious practices, in the form of incense.

The human sense of smell is a powerful one, capable of forming some of our strongest memories and associations, as well as producing changes in mood, emotion, and other physiological responses. With the creation of perfumes, humanity was able to stimulate and shape the sense of smell to produce a desired response. Perfume became a means not only of controlling the environment, but also of exercising control over our emotions directly through our senses. The use of perfume in its various guises remains as popular as ever in modern times, with the industry at the beginning of the twenty-first century having a value of about $10 billion. MT

c. 3000 BCE

Ma’at

Ancient Egypt

A personified principle of rightness and regularity in the universe

An ancient Egyptian painting of the goddess Ma’at (c. 1600–c. 1100 BCE), shown wearing her customary ostrich feather. The ostrich feather was used as a symbol of truth.

Although the first mention of Ma’at—the ancient Egyptian principle of truth, justice, and regularity in the universe—is in the Pyramid Texts (c. 2375–2345 BCE), the general concept is much older. Like the Persian Asha, the Hindu Rta, the ancient Chinese Dao, and the Stoic Natura, Ma’at is the divine standard and ordering principle of creation, standing in opposition to both spiritual and physical chaos. The Egyptian concept of chaos, called Isfet, is similar to the “waters” of the Mesopotamian and biblical traditions, which God separated and held back in order to create the world. And just as the Bible metaphorically speaks of God’s ordering wisdom in the feminine (“Wisdom … She …”), so too was Ma’at often depicted as a goddess who brings order and justice to the universe. In Egyptian mythology, she was viewed as being responsible for regulating the stars and the seasons, as well as the actions of both mortals and gods.

“I have not committed sin or injustice … I have not transgressed against Ma’at.”

Papyrus of Ani (c. 1250 BCE)

Although a principle as broad as Ma’at can be seen everywhere, she was most often pictured at the judgment throne. As the principle of truth and justice, Ma’at acted as the judge or advisor to the judge of the underworld. Her task in this role was to determine whether a soul was just or unjust by weighing the heart of the dead person against a feather. If the scale balanced, then the deceased was allowed to continue on to the afterlife; if the heart was heavier than the feather, then the deceased was deemed not to have followed the principles of Ma’at during their life and their heart was eaten by a demon. In politics, the pharaohs were often called “the lords of Ma’at” because they were supposed to keep order in society as rulers answerable to the divine. Although the term “Ma’at” was probably of limited influence outside of Egypt, the concept—by whatever name—is central to how most societies understand order, especially moral order, and regularity. AB

c. 3000 BCE

Numerology

Ancient Egypt

A mystical connection between numbers and the world

Numerology appeals to a divine or mystical connection between the properties of numbers and aspects of human life. An ancient fascination with the power of numbers can be identified in many early cultures, notably that of ancient Egypt from c. 3000 BCE. Depictions of the Egyptian mythological figure of Seshat, the goddess of both mathematics and astrology, have survived in major temples throughout Egypt, offering an insight into the interplay between the mathematical and the mystical in Egyptian culture.

The study of numbers and their relation to the world was also adopted in later cultures and was prominent in early Greek philosophy. The Greek philosopher Pythagoras (c. 570–c. 495 BCE) was particularly influential in this respect, developing several mathematical principles that are still in use today. These principles arguably rest on numerological foundations, given the mystical approach to the properties of numbers that Pythagoras employed in establishing them.

“Numbers are the universal language offered by the deity to humans …”

St. Augustine of Hippo, Christian philosopher

Modern numerology draws on many aspects of the ancient mystical treatment of numbers in order to analyze personality types or predict future events. Pythagorean numerology, for example, developed in the 1970s, appeals to the Pythagorean idea that “all is number.” Although today this is largely discredited as a pseudoscientific practice, its ancient origins are responsible for certain persistent superstitions. In modern Chinese culture, for example, even numbers are considered luckier than odd, while many Westerners will be familiar with an unaccounted-for aversion to the number thirteen. LWa
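The digit-reduction at the heart of modern Pythagorean numerology is simple enough to state exactly. The following minimal Python sketch is offered purely as a description of the practice, not an endorsement of it (the function names are invented for illustration): letters are mapped to 1 through 9 in repeating sequence, summed, and the total reduced to a single digit.

```python
def letter_value(ch: str) -> int:
    """Pythagorean mapping: A=1 ... I=9, J=1 ... R=9, S=1 ... Z=8."""
    return (ord(ch.upper()) - ord("A")) % 9 + 1

def life_number(name: str) -> int:
    """Sum the letter values, then add the digits of the total until a
    single digit remains. (Some practitioners keep 11 and 22 unreduced
    as 'master numbers'; this sketch reduces everything.)"""
    total = sum(letter_value(c) for c in name if c.isalpha())
    while total > 9:
        total = sum(int(d) for d in str(total))
    return total

print(life_number("Pythagoras"))  # 49 -> 13 -> 4
```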

c. 3000 BCE

Mathematics

Ancient Egypt

A symbolic representation of abstract numerical ideas

The Rhind Mathematical Papyrus (c. 1650 BCE) is one of the oldest mathematical texts in the world.

The ability of humanity to formulate and use mathematical concepts probably predates historical records, since questions about measurement, size, and quantity have always been of practical concern, as indeed has the ability to count. Prehistoric artifacts dating as far back as 30,000 BCE, while not in themselves evidence of grasped mathematical concepts, are some indication that people were making marks or tallies in attempts to count or quantify.

Hieroglyphics dating from about 3000 BCE are evidence that ancient Egyptians were already using numerals, while ancient Egyptian papyri dating from around 2000 BCE represent some of the earliest known mathematical texts in existence. Nothing survives to document how much the ancient Egyptians actually understood of mathematical concepts, but early Indian civilizations appear to have used geometric patterns as early as 2600 BCE. Chinese mathematics seems to have arisen independently, with the oldest Chinese mathematical text originating around 300 BCE.

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty.”

Bertrand Russell, philosopher

Mathematics is at once vital to human progress and, in a sense, impractical: every mathematical concept is “only” an abstraction, but precisely because it is abstract, mathematics allows for answers and deductions that are not constrained by the realities of the natural world. Mathematics, therefore, is a discipline, science, and language that creates a bridge between the world of thought or concepts and the everyday reality of human existence. It is a bridge that we traverse constantly to calculate, measure, and solve many of the other problems we face. MT

c. 3000 BCE

Egyptian Funerary Art

Ancient Egypt

The preservation and honoring of those passing into the afterlife

The Egyptians mummified bodies by embalming them to dry them out and then wrapping them in linen strips.

Egyptian funerary art was motivated by the central religious and cultural belief that life continued after death, which was a feature of ancient Egyptian society from about 3000 BCE. Practices such as mummification, the creation of sarcophagi, and the building of pyramids and tombs were intended to honor and preserve the body of the deceased in order to ease their transition to the afterlife. In addition, a number of carefully selected objects were often buried with the deceased person, comprising either personal possessions or more valuable items depending on their wealth and status during life.

The discovery and investigation of Egyptian funerary art have proven invaluable to archaeologists attempting to piece together the social order and structure of ancient Egyptian civilization. Procedures such as the delicate preservation of the deceased’s internal organs in Canopic jars during the process of mummification serve to demonstrate the elaborate and complex nature of the belief systems in place as well as the power and significance of the belief in an afterlife that was held throughout Egyptian society.

“For an ancient Egyptian nobleman … a fine burial … was his greatest aspiration.”

Abeer el-Shahawy, The Funerary Art of Ancient Egypt (2005)

The best-known surviving examples of Egyptian funerary art are undoubtedly the Great Pyramids found at the Giza Plateau on the outskirts of Cairo in modern Egypt. Now a UNESCO World Heritage site, the Pyramids are today more significant as a popular tourist destination than as the focus of a religious or spiritual belief system. However, they continue to represent a firmly held belief in the afterlife that plays a central role in the lives of many religious believers today. LWa

c. 3000 BCE

Autopsy

Ancient Egypt

Careful examination of a cadaver to find out the cause of death

The word “autopsy” comes from the Greek autopsia (to see with one’s own eyes), and refers to the postmortem examination of a body, usually by a physician of some kind, to discover the cause of death. The ancient Egyptians invented the autopsy procedure around 3000 BCE, although their focus was on preparing the body for immortality rather than ascertaining what occurred in the body’s final moments. Much later, ancient Greek physicians, particularly Erasistratus and Herophilus in the third century BCE, became the main developers of the technique of cutting open a cadaver for the purpose of advancing knowledge of anatomy. Some four centuries later, the Roman physician Galen (129–c. 200/c. 216 CE) used personal observation of the interior of the human body to link a person’s symptoms with abnormalities found during the autopsy after their death. His observations marked the beginning of reliable, scientific medical diagnosis.

“The autopsy has a long and at times ignoble history …”

The Hospital Autopsy (2010), ed. by J. L. Burton & G. Rutty

In the early nineteenth century, an expansion of the medical profession caused a shortage in the legal supply of bodies for the purpose of dissection by students in pursuit of their studies. Corpses were obtained by robbing graves, buying bodies from workhouse infirmaries, or even by resorting to murder. The best-known case is that of Burke and Hare, who murdered sixteen people in 1828 in order to sell their bodies to the medical schools of Edinburgh. Modern-day autopsies bring scientific rigor to detective work in homicide cases and are important as a safeguard against the illegal taking of life by corrupt medical practitioners, care workers, and others. JF

c. 2800 BCE

Soap

Babylonia

The creation of a substance from the salt of a fatty acid that, when dissolved in water, possesses the ability to remove dirt from a variety of surfaces

Early soaps were used primarily in the processing of textiles such as wool, the manufacture of which is depicted here in a first-century CE fresco from Pompeii.

No one can be sure when soap was first made, but the earliest evidence is from 2800 BCE: archaeologists excavating ancient Babylon discovered a soaplike material deposited as a lining in clay cylinders. Other digs, at Babylonian sites some 600 years younger, have yielded the earliest known record of how to make soap. Clay tablets inscribed with cuneiform reveal formulas that mix water, cassia oil, and ashes, but do not explain how the soap was used.

“If I rub my hands with it, soap foams, exults … The more complaisant it makes them, supple, smooth, docile, the more it slobbers, the more its rage becomes voluminous, pearly …”

Francis Ponge, Soap (1967)

The Ebers Papyrus, written in Egypt in c. 1550 BCE, not only describes how to combine animal fats and vegetable oils with alkaline salts to create soapy substances, but also explains how this could then be used for washing. All soaps have essentially the same chemical structure: a hydrophilic (water-loving) head on a hydrophobic (water-fearing) hydrocarbon tail. This arrangement enables soap to perform the deceptively simple act of cleaning. First, it interacts with water molecules, reducing surface tension, so they spread better. Then, while its hydrophilic heads attract these molecules, its hydrophobic tails (which are also lipophilic, or fat-loving) embed into grease. When enough tails have lodged into a dirt particle, it is lifted into the water; the grease is effectively washed away.

The first use of soap was for cleaning wool and cotton used in the manufacture of textiles, and it was also used as a medical treatment for skin problems. It was not until the second century CE that soap was referred to in relation to personal cleanliness.

As to the question of how this cherished household helper got its name, Roman legend tells the story of Mount Sapo, a mountain where animals were ritually sacrificed. When the rains washed the animal fats and wood-fire ashes down to the river Tiber, it takes no great leap of imagination to guess what formed when they mixed, sapo being Latin for “soap.” JH

c. 2600 BCE

Literature

Sumeria

Communicating ideas, beliefs, and experiences through the written word

What differentiates literature from other forms of writing is not always clear. While technical, descriptive, and scientific works are sometimes included in the broader definition of literature, the term more commonly encompasses only creative, expressive, and narrative works. Through literature, then, a writer uses words to convey ideas, feelings, experiences, and other commonly shared human phenomena, instead of simply relating facts.

Literature could not have existed without the invention of writing, yet writing in itself does not necessarily constitute literature. Writing had existed as early as 3000 BCE in the Bronze Age cultures of ancient Mesopotamia, yet literature did not appear until 400 years later in ancient Sumeria. Prior to the invention of writing, literature existed only in an oral form, passed down between generations as stories and myths. One of the earliest and best known of these stories is the Epic of Gilgamesh—a description in the Akkadian language of the odyssey of Gilgamesh, the king of the Mesopotamian city-state Uruk—which was first recorded in writing in around 2000 BCE. Various forms of literature arose independently across ancient civilizations and changed over time, coming to include everything from poetry to drama, narrative fiction, and graphic novels.

Literature occupies a spectrum, from escapist enjoyment or tedious study to a bond that develops between those who read a specific work and share a common identity or purpose. A literary work can express the fears, desires, or feelings of an entire nation, and can hold historical and cultural significance. Literature enables ideas to be transmitted across geographic and temporal boundaries, serving as a link not only between cultures, but also between the past and the present, between generations, civilizations, and peoples. MT

c. 2500 BCE

Abacus

Sumeria

The oldest ancestor of the modern calculator and computer

First developed by the Sumerians in ancient Mesopotamia in about 2500 BCE, the abacus is a device used for counting and making arithmetical calculations. The etymology of the term “abacus”—which comes from Semitic languages in which the noun abaq means “dust”—has given rise to the theory that the original abacuses were flat boards covered with sand, on which numbers could be written and then erased.

From its ancient origins, the abacus eventually spread to Greece: a marble tablet, 5 feet (150 cm) long and 2 feet 6 inches (75 cm) wide, from the Greek island of Salamis, made in around 300 BCE, is the oldest counting board that has so far been discovered. The abacus then reached Rome and China. Over time, it developed from a shallow sandbox into its now familiar form: a frame across which are stretched several wires, each strung with a number of beads that can be slid from end to end. Each row of beads represents a different value: one row is typically units, another tens, another hundreds.
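That row-by-row arrangement is place-value arithmetic in physical form. The Python sketch below is a minimal model for illustration, not a rendering of any particular historical device.

```python
# A minimal sketch: each wire of an idealized decimal abacus is modeled
# as a digit, so sliding beads amounts to place-value arithmetic.

def abacus_value(rows):
    """rows[0] is the units wire, rows[1] the tens wire, and so on."""
    total = 0
    for power, beads in enumerate(rows):
        total += beads * 10 ** power
    return total

# Three beads on the units wire, none on the tens, two on the hundreds.
print(abacus_value([3, 0, 2]))  # -> 203
```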

“Learning how to use the abacus can help to improve concentration [and] memory.”

Paul Green, How to Use a Chinese Abacus (2007)

In around 700 CE, the Hindus developed an innovative numeral system with place values and zeroes that made counting, addition, subtraction, multiplication, and division easier than ever before to carry out in writing. This new idea caught on with the Arabs, who introduced it into Europe in around 1000 CE. From then on, abacuses were used less frequently in the West, although even today they remain common sights in China, Japan, and parts of Western Asia, where the best operators can keep pace with people using pocket calculators. GL

c. 2300 BCE

Map

Babylonia

A graphical representation of a geographic area

A Babylonian cuneiform tablet from c. 700–500 BCE, containing a map of Mesopotamia. In the center is Babylon, surrounded by Assyria and Elam.

The oldest possible evidence of humanity creating visual depictions of spatial relationships comes from the Lascaux caves in southwestern France, where prehistoric man may have painted images of the stars in the night sky. However, it was not until about 2300 BCE, in ancient Babylon, that clear evidence of what we would easily recognize as a map emerged. Inscribed on clay tablets, Babylonian maps show natural terrain features and cities, as well as cuneiform labels for locations and even directions.

“Journey over all the universe in a map, without the expense and fatigue of traveling, without suffering the inconveniences of heat, cold, hunger, and thirst.”

Miguel de Cervantes, Don Quixote (1605)

In the first millennium BCE, cartography—the art and practice of mapmaking—advanced considerably in ancient Greece and Rome. The Chinese developed maps as early as the fourth century BCE, and ever since their introduction, humanity has improved on them.

A map is a visual representation of the spatial relationships between features or factors. Usually drawn to scale, maps can represent land features, bodies of water, political boundaries, populations, elevations, cultural differences, and a host of other kinds of information. Maps are often made from the perspective of someone looking down, which allows for two-dimensional representations of three-dimensional spaces. They are often made on flat surfaces such as walls or paper, though three-dimensional maps, such as globes, are also common, as are flat projections of non-flat areas, such as maps of the world.

Maps are images, and as images they transcend written and verbal language. They are a form of communication that shares information across different cultures and regions, conveying knowledge instantaneously through the medium of symbols. Even if a map is grossly unrealistic and in no way looks or feels like the features it purports to represent, it allows humans to visualize what they might otherwise never be in a position to see in person. MT

c. 2300 BCE

Dictionary

Mesopotamia

A book that collects together in a standard form all the words of a language

Samuel Johnson’s publication of a dictionary in 1755 earned him a degree from the University of Oxford. Johnson illustrated word usage in his dictionary with numerous literary quotations.

The world’s oldest dictionary dates back to about 2300 BCE. The bilingual cuneiform tablet was written during the reign of Sargon of Akkad (c. 2334–c. 2279 BCE), who unified the Sumerian city-states and created the Akkadian Empire of Mesopotamia, and contains Sumerian words and their Akkadian counterparts.

“Take care that you never spell a word wrong. Always before you write a word, consider how it is spelled, and, if you do not remember, turn to a dictionary.”

Thomas Jefferson, U.S. president 1801–09

In about 300 BCE, the Chinese developed the first dictionary that organized words of the same language, grouping them as synonyms in nineteen different categories. Multilingual dictionaries and glossaries of specialized terms were common in Europe during the Middle Ages (c. 500–c. 1450), but it was not until Samuel Johnson (1709–84) created A Dictionary of the English Language in 1755 that the first modern dictionary made its appearance. Johnson’s dictionary, completed after nine years, attempted to encompass all the words of the English language, not merely the obscure ones.

Today there are numerous types of dictionaries that contain explanations of the words of individual languages, and a host of multilingual and specialized dictionaries on almost any topic. Some dictionaries list terms relevant to a particular subject matter or field of study—legal or medical dictionaries are examples—while others contain words from one or more languages with translations.

The first dictionaries were of little importance in societies where oral communication was the norm. Complete dictionaries were needed only after the invention of writing; only became practical after the invention of the printing press; and only became truly necessary after literacy became commonplace. The dictionary codified language and provided a measuring stick that the literate could apply to their own tongue. Referred to by everyone, the dictionary conferred uniformity on how the written word was used, enabling free and accurate communication—without any spelling mistakes. MT

c. 2280 BCE

Consolation of Philosophy

Ancient Egypt

Determined and careful reasoning can in itself bring comfort to the soul

Humans have utilized their rationality for many things, including philosophical speculation or, broadly speaking, the activity of thinking hard about a problem. In particular, humans have thought hard about suffering and death. Although almost certainly not the first to have thought about these problems, the Egyptians, in A Dispute over Suicide (c. 2280 BCE), are among the first to have recorded their speculation. The Mesopotamians, too, wrote semi-philosophical treatises, such as the Babylonian Theodicy (c. 1000 BCE).

The best-known instances of such philosophical works are The Apology (399 BCE) and Phaedo (360 BCE) by Plato (c. 424–c. 348 BCE), but it was the Christian Platonist Boethius (c. 480–c. 524 CE) who coined the phrase “the consolation of philosophy.” In his book of the same name, written in 524, Boethius finds himself unjustly imprisoned, awaiting execution. Stripped of all his former glory, power, and influence, he is overcome with despair until Lady Philosophy arrives. Unlike Lady Wisdom in the Book of Proverbs, who simply tells her pupil what’s what, Boethius’s Lady Philosophy encourages him to think carefully about why he is suffering and why, ultimately, it does not matter.

“I … decided to write down my wretched complaint …”

Boethius, The Consolation of Philosophy (524 CE)

The consolation Lady Philosophy brings is not occasioned, as secular humanists sometimes imagine, by a rejection of religion; rather, Boethius chooses between finding no comfort from anything, finding comfort through non-speculation, and finding comfort through philosophical reasoning. The third option implies being able to draw on evidence wherever it may be found, including in religious sources. AB

c. 2150 BCE

Poetry

Sumeria

A literary art form that uses words and their sounds to convey meaning vividly

A tablet containing a fragment of the Epic of Gilgamesh (c. 2150 BCE), the oldest known written poem.

Humanity was able to use language long before the invention of the written word, and poetry almost certainly existed before writing. When early cultures wanted to relate stories or oral histories, they may have used poetic forms and styles, such as rhyme and meter, to make it easier for listeners to remember what they heard. In a more modern sense, poetry uses words to create works of art. Distinct from prose, poetry relies on the qualities of the words as written, formatted, or structured not only to convey meaning but to do so in a beautiful or stylistic way.

The first known written poem, the Epic of Gilgamesh, came from ancient Sumeria sometime around 2150 BCE. Poetry arose across cultural boundaries, with notable early works coming from Indian, Chinese, and Greek societies. Over the millennia, poets have incorporated a range of literary devices, such as alliteration and rhyme, as well as written poetry in an array of formats and arrangements. The more recent development of free verse or free-form poetry has given poets the opportunity to create without the formal, structural boundaries of the past.

“Poetry is finer and more philosophical than history …”

Aristotle, Greek philosopher and scientist

Humanity created words to describe the natural world, yet these words are not merely descriptors—simply hearing them can create an emotional reaction. Poetry captures the impact that words have on us. Whether structured, rhymed, or free form, poetry seizes the images and emotions that occur in our minds when we hear or read words, and shapes those words into forms that impart more meaning than they might otherwise suggest. MT

c. 2150 BCE

Flood Myth

Mesopotamia

An ancient deluge sent as punishment against humans

An Ottoman miniature from the thirteenth century, depicting Noah’s Ark.

Most ancient civilizations had a flood myth recounting, with remarkable similarity, how God or the gods sent, as punishment, a massive flood to destroy most humans. A Sumerian account from c. 2150 BCE is the oldest written record of a flood myth, though the biblical account is the most detailed. In all versions of the myth, the divine becomes angered by the actions of the newly created humans. A flood is then sent to destroy all the humans that have earned divine displeasure, but in every case there is one outstanding man who is warned of the coming doom and told to build a boat in order to survive the ordeal. In most cases, the man brings his family on board, along with a pair of every kind of animal that would be in the flood’s destructive path. Once divine wrath has ceased and the flood waters recede, the humans give thanks to the divine and then repopulate the destroyed area.

“We sent the Flood, but a man survived the catastrophe …”

Atrahasis III.8 (Mesopotamian flood myth tablet)

Although some have understood this myth to be the story of an event that affected the whole world, this is not necessarily the best reading. Descriptive phrases such as “and the waters covered all the mountaintops” are probably just hyperbole, suggesting that while the global similarity of the myths makes an original historical event of some kind quite likely, it would probably be best to view it, both textually and scientifically, as a local—perhaps Near Eastern—flood.

Whatever the case, the flood myth has consistently inspired religious belief in some form or another. The rainbow that follows the flood provides a powerful symbol of both God’s justice (in punishing evildoers) and love (in sparing the repentant). AB

c. 2150 BCE

Catastrophism

India

The theory that the Earth has been affected in the past by sudden violent events

Catastrophism is the theory that the Earth’s geomorphological features originated with a series of great catastrophes that have occurred throughout history. These catastrophes were originally held to be so great in scale that ordinary processes on Earth could not be responsible, and supernatural forces had to be the cause. Such a theory was prevalent in many early mythologies in their accounts of astounding floods, such as those found in the Epic of Gilgamesh (c. 2150 BCE), the Book of Genesis, and in Plato’s (c. 424–c. 348 BCE) accounts of the Ogygian flood in his dialogues Timaeus, Critias, and Laws (all written c. 360 BCE).

“The hearts of the Great Gods moved them to inflict the Flood …”

Epic of Gilgamesh (c. 2150 BCE)

By the beginning of the nineteenth century, the leading scientific proponent of catastrophism was the French anatomist and paleontologist Georges Cuvier (1769–1832). The kind of catastrophism that he posited was later combined with uniformitarianism (the belief that the Earth’s changes occurred gradually over a long period of time), initially via the work of the geologist Walter Alvarez (b. 1940). The modern synthesis of the two schools of thought recognizes both long processes of geologic change and the occurrence in the Earth’s history of massive, era-defining changes (notably, meteor strikes), which no longer need to be explained with reference to supernatural intervention. Early adherents of catastrophism naturally turned to God and superstition to explain phenomena about which they had no information, but the idea became an influential element in the modern understanding of geologic change and processes. JE

c. 2150 BCE

Immortality

Sumeria

The idea that a being can live forever in body or spirit

A Sumerian tablet (c. 2500 BCE) showing the tree of life, the fruit of which was said to endow longevity.

There are two basic types of immortality: immortality of the spirit and physical immortality. Immortality of the spirit is the idea that a person, or animal, possesses a soul or a supernatural component that, even after the body dies, goes on to live forever. Physical immortality is the idea that the material body itself is immune to death, or is otherwise unable to die.

Throughout the history of humanity, nearly all people have had some experience with death, either of a person or another creature. That all life is mortal is readily apparent to any observer, yet at the same time humanity has developed the idea of immortality, the notion that existence does not end. Anthropologists have identified a belief in some type of immortality as being present from the earliest known cultures. In the ancient Sumerian Epic of Gilgamesh (c. 2150 BCE), widely regarded as one of the first written narrative tales, Gilgamesh, the king of Uruk, embarks on a quest for immortality. Some religious traditions hold that spiritual immortality is closely linked with the actions people take in their lifetimes; in other traditions, there is no direct relationship between a person’s ethical and moral activity and whether that person “earns” the continuance of an immortal spirit or eternal life.

Is death the end? Does our consciousness survive our mortality, or will our spirit continue into eternity? The belief in immortality provides answers to these basic human questions, even if the belief is inaccurate. For believers, the prospect of eternal life, or eternal damnation, often serves as a motivator, providing a reason for engaging in ethical or moral behavior. However, as modern science advances, this consideration may one day become irrelevant—methods for halting the ageing process are becoming increasingly sophisticated, and some scientists theorize that the power of technology could eventually enable the human body to live forever. MT

c. 2100 BCE

Inheritance

Sumeria

The transference of a deceased person’s property to a new owner

An Assyrian tablet from c. 1300 BCE, engraved with legal text that relates to inheritance.

In a society in which a person can own property, there is always the inevitable question of what happens to that property after the owner dies. The idea of inheritance answers this question, establishing exactly who becomes the new owner of the property formerly owned by the deceased.

There have been a number of inheritance systems throughout history, such as a parent’s property passing entirely to the eldest male child, entirely to the youngest child, equally to all male children, equally to all children, disproportionately split between children of different ages, only from fathers to sons, and only from mothers to daughters, among others. Inheritance can encompass more than just property, and may also include rights or obligations, such as the right to become king, or the obligation to repay an unpaid debt.

Perhaps the oldest known reference to inheritance comes from ancient Sumeria. The Sumerian Code of Ur-Nammu (c. 2100–2050 BCE), the oldest known legal code, contained several individual inheritance laws. The later Babylonian Code of Hammurabi, which appeared in about 1772 BCE, contained well-established inheritance laws that addressed issues such as how to divide the deceased person’s property, the rights a wife had to distribute property that was given to her, and when a son could be disinherited.

In one respect, inheritance systems are very pragmatic, answering the question of ownership when an owner dies. However, they can also have a significant impact on family and social relationships. Through inheritance, children know that they will become the owners of wealth, ascend to the throne, or have to answer for parental mistakes. As one generation inherits biological traits from their parents, so too do they inherit social standing, property, and even stigma. MT

c. 2000 BCE

Oracle

China

The power of divinely inspired prophecy and prediction

Oracles in the ancient world were typically religious figures believed to possess prophetic abilities inspired by the gods. The word “oracle” comes from the Latin verb ōrāre, meaning “to speak,” and indicates the oracle’s role in delivering messages from the divine. Oracles are known to have advised some of the most powerful figures in human history, and as such were themselves both powerful and influential individuals.

Belief in the power of oracular prophecy was a feature of many ancient cultures, including those of China and India where the presence of oracles can be traced back to the second millennium BCE. In European history, some of the earliest and best-documented accounts of oracles derive from Greek antiquity. Perhaps the best-known was the oracle at Delphi, who became famous in the seventh and sixth centuries BCE for communicating with the Greek god Apollo. The role was customarily filled by a priestess, who would advise politicians, philosophers, and kings on issues of war, duty, and the law. The fact that men of power in the male-dominated Greek society took advice on these issues from a woman demonstrates the high level of faith that was invested in oracular prophecy at the time.

“The ancient oracle said that I was the wisest of all the Greeks.”

Socrates, quoted in The Apology by Plato (399 BCE)

In the modern world, oracles are no longer a common feature of daily life. However, the idea of prophecy is still drawn on in a variety of contemporary contexts. One of the most enduring examples of oracular prophecy can be found in the continued use of the I Ching, an ancient Chinese system of divination that is still in use as a means of predicting future events in both Eastern and Western cultures. LWa

c. 2000 BCE

Astrology

Mesopotamia/China

Humanity’s search for deeper meaning in the skies

Chinese pottery figures from c. 396, representing the horse and tiger signs of the zodiac.

Astrology is a system of beliefs that appeals to the motions of the celestial bodies in order to explain features of human life or to predict future events. The ancient origins of astrology can be traced back to the Mesopotamians in the second millennium BCE and at a similar time to the peoples of ancient China. The Chinese astrological system is distinctive for its use of animals to symbolize the twelve years of the zodiac, beginning with the year of the Rat and ending with the year of the Pig. Each animal is associated with a set of personality traits ascribed to those born in a given year.
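The mapping from a birth year to its animal is a simple twelve-year cycle, as the Python sketch below illustrates. It is a minimal model for demonstration: it uses the Gregorian year and ignores the lunar new-year boundary, so dates early in a calendar year may in practice belong to the previous animal.

```python
# A minimal sketch of the twelve-animal cycle. 2020 was a Rat year, and
# the cycle repeats every twelve years; traditional reckoning follows
# the lunar new year, which this simplified model ignores.

ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def zodiac_animal(year: int) -> str:
    """Return the zodiac animal for a given Gregorian year."""
    return ANIMALS[(year - 2020) % 12]

print(zodiac_animal(2020))  # -> Rat
print(zodiac_animal(2031))  # -> Pig
```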

In contrast to this, the astrological systems that developed in ancient India and South Asia were focused more directly on predicting a person’s destiny or fate. This alternative focus also appeared in Western astrology and persists in its modern manifestation, in which it is believed that events in a person’s life can be predicted and explained by the motions of the stars and planets. This form of astrology uses the signs of the tropical zodiac and is well known in the Western world through the medium of horoscopes.

“We are merely the stars’ tennis balls, struck and bandied which way please them.”

John Webster, The Duchess of Malfi (1613)

Although this Western form of astrology has been largely discredited as pseudoscientific in the modern world, its influence as an idea is difficult to overestimate. With a significant number of people still regularly consulting horoscopes, it cannot be denied that astrology plays an active, if perhaps minor, role in the day-to-day lives of many. Likewise, elements of ancient astrology can be identified in contemporary Indian, Japanese, and Chinese cultures, within both personal and political spheres. LWa

c. 2000 BCE

Colonialism

Ancient Egypt and Phoenicia

The process of taking control of the land, resources, and people of a nation by military force

An illustration of a French colonial expedition in Madagascar, 1895. Madagascar was declared a French colony in 1896, and remained one until 1960.

Colonialism is the idea behind a process of political, economic, and often cultural domination by a more powerful nation over a weaker one, usually obtained through military force. The earliest examples of this process can be found among the ancient empires of Egypt and Phoenicia in the second millennium BCE, which used colonialism as a means of securing trade routes, and later with the expansion of the Greek and Roman empires in the first to fifth centuries CE. The latter of these encompassed a significant portion of modern Europe at its height in the second century CE, and can be credited with introducing numerous technological innovations, such as central heating and improved sanitation, to colonized areas. As such, Roman colonialism had a lasting impact on the development of Europe.

“The worst thing that colonialism did was to cloud our view of our past …”

Barack Obama, Dreams from My Father (1995)

In more recent history, the European colonial period, beginning in the sixteenth century and lasting for approximately 500 years, gave rise to a number of large-scale empires, including those of Spain, Portugal, Britain, and France. Primarily motivated by the prospects of economic gain in the countries of Asia, Africa, and the Americas, the European colonists took control of indigenous resources and trade routes by means of military force. Practices of brutal domination during this period were often justified in terms of “civilizing” the “uncivilized” native populations.

Colonialism in this period was closely connected to the crusading mission of Christian countries, which sought to supplant indigenous religious practices with Christianity. The impact of this religious colonialism can still be seen in the modern world, with many now-independent nations maintaining Christianity as their state religion. Thus, despite the cessation of widespread colonialism in today’s geopolitical landscape, the social and cultural impacts of historical colonization continue to shape the lives of many. LWa

c. 2000 BCE

Incarnation

Ancient Egypt

The physical embodiment of the divine in animal or human form

An Egyptian stele dedicated to the bull deity Apis, from c. 1000 BCE. Apis, the living bull, was worshipped as the earthly incarnation of the god Ptah.

Incarnation describes the birth or manifestation of a divine being in sentient form, either as human or animal. The origins of this idea can be traced as far back as ancient Egypt during the second millennium BCE, when the ruling pharaohs were believed to be incarnations of the Egyptian gods Horus and Ra. However, the idea was adopted by many of the world’s major religions, and it is arguably still one of the most contentious aspects of religious belief today.

“And the Word became flesh and dwelt among us, and we have seen his glory, glory as of the only son from the Father.”

The Bible, John 1:14

One of the most widely known and believed incarnations in religious history is that of Jesus Christ, who in Christian doctrine is said to be the son of God. The unity of man and God in this incarnation is of central importance to Christian believers, as it represents the presence of the divine in an otherwise human world. However, both Islam and Judaism categorically reject the idea of Jesus as the incarnation of any form of the divine. Mainstream Islamic believers instead regard Jesus as a prophet, or messenger of God, alongside figures such as Adam, Noah, Abraham, Moses, and Muhammad. Judaism, on the other hand, denies that Jesus was any form of prophet at all. This belief marked an important difference between Jewish and Christian believers after the death of Christ in the first century CE.

In the modern world, the idea of incarnation is still one of great significance for many religious believers. It plays a central role in Buddhist belief systems, where it is considered a feature of the continued cycle of birth and rebirth, which can only be broken by the attainment of enlightenment. Thus, for Buddhists, incarnation is an indication that enlightenment has not yet been attained. Moreover, the idea of incarnation continues to divide the major monotheistic religions, and as such it can be viewed as one of the most fundamentally divisive and profound ideas in human history. LWa

c. 2000 BCE

Loan

Babylonia

Providing goods, services, or money to a borrower in exchange for future repayment

It is not clear when the first loan occurred, but there is evidence to show that by 2000 BCE, ancient Babylonians were using a system of lending in which temples loaned farmers seeds at the beginning of the planting season. The farmers would then take the seeds, plant the crops, and repay the loan after selling the harvested product. Ancient Greeks and Romans made wider use of loans from the fourth century BCE, with the use of interest charges becoming commonplace. In modern times, and especially since the twentieth century, loans have permeated world economies.
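The arithmetic behind such an interest charge is easy to sketch. The Python below is purely illustrative: the 20 percent rate, the seasonal compounding, and the grain-loan scenario are assumptions for demonstration, not historical figures.

```python
# A minimal sketch (assumed figures, not historical data): charging
# compound interest each period turns a loan into a larger repayment.

def repayment(principal: float, rate: float, periods: int) -> float:
    """Total owed after interest is charged once per period."""
    return principal * (1 + rate) ** periods

# Borrow 100 units of seed grain at 20% per season, repay two seasons later.
print(repayment(100, 0.20, 2))  # -> 144.0
```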

Moneylending has traditionally been viewed in a negative light, and moneylenders often appear as villains in literature—such as Shylock in William Shakespeare’s The Merchant of Venice (c. 1596). The practice of charging interest on a loan has been condemned at various times throughout history by numerous religious traditions, including Hinduism, Buddhism, Judaism, Christianity, and Islam.

“Neither a borrower nor a lender be, For loan oft loses both itself and friend …”

William Shakespeare, Hamlet (1603)

Today, loans are ubiquitous around the world. People borrow from lenders, banks borrow from other banks, and governments even borrow from themselves. Loans offer the ability to pay over time for something that would otherwise be out of the buyer’s reach; without them, people can purchase only what they can afford with the money they currently possess. The promise of later repayment has built wealth, has allowed consumers to purchase homes and create businesses, has given nations the ability to pay for wars, and has led to countless instances of financial ruin. MT

c. 1772 BCE

Polygamy

Babylonia

The practice or custom of having more than one wife or husband at the same time

An eighteenth-century miniature painting depicting the Mughal emperor Jahangir with his harem.

The term “polygamy” includes marriages between a single male and multiple females (polygyny); marriages between a single woman and multiple men (polyandry); and marriages between multiple members of both sexes (polyamory). In biological terms, the propensity for male mammals to have more than one mate at the same time is correlated with the relative size of males and females; as a result, prehistoric humans are generally thought to have been polygamous. The first evidence of polygamy as a social practice, however, perhaps occurs in the Code of Hammurabi, a Babylonian legal text dating to about 1772 BCE. The code stated that unless a man’s first wife was infertile or ill, or had engaged in marital misconduct, it remained up to her whether to allow her husband to marry a second woman.

“A thought is come into my head … to marry another wife …”

Bernardino Ochino, A Dialog on Polygamy (1563)

According to at least one study, more than 90 percent of identified cultures—including both contemporary and historical societies—have recognized at least one form of socially accepted polygamy. Polygyny has tended to be the most prevalent. There are several advantages to polygamy as a form of matrimony: socially, it can be seen as a mark of status, especially when wealthy or powerful males have multiple spouses; economically, it can function as a means of producing readily available family labor and, as a result, more wealth. For those in a polygynous marriage, it can also have a positive effect on maternal and child health. However, while marriage is common throughout the modern world, polygamy is not—even in countries where it is accepted socially or legally. MT

c. 1772 BCE

Code of Hammurabi

Hammurabi

The idea that subject peoples of many creeds may be united by common laws

A Babylonian stele (c. 1750 BCE) inscribed with the Code of Hammurabi.

Although the Code of Ur-Nammu (2100–2050 BCE) is the oldest surviving written law code, the Code of Hammurabi, drawn up in c. 1772 BCE, is arguably the most complete and best known. Hammurabi (r. 1792–1750 BCE) was the sixth king of Babylon, ruling over a vast territory inhabited by many different peoples. The Code of Hammurabi was a set of transcultural laws designed to establish his authority throughout his kingdom. It comprised 282 individual laws addressing matters of justice in respect to religion, trade, slavery, marriage, and so on.

In the preface to the Code, Hammurabi cites many Mesopotamian gods, most importantly Shamash, the sun god, evoking the ancient idea that justice is connected with divine illumination. Indeed, there are parallels between the Code of Hammurabi’s law, which states: “If a citizen has destroyed the eye of one citizen, they shall destroy his eye,” and, for example, the Hebrew Bible’s Mosaic Law of “an eye for an eye.”

“If a citizen has destroyed the eye of one citizen, they shall destroy his eye.”

The Code of Hammurabi 196

The importance of the Code of Hammurabi lies, first, in its mere existence: before Hammurabi, no one had tried to use written laws to unify so many different peoples; second, its impact lies in some of the laws it put forward. For example, accused persons—usually women—could expect to be cast into a river, either as a trial of their innocence or as a test of their accuser’s forgiveness. It seems that procedures like these found their way into the barbaric “trials by water” of women accused of witchcraft in sixteenth- and seventeenth-century Europe, which saw “guilty” women float and “innocent” women sink. AB

c. 1772 BCE

Human Adoption

Hammurabi

The assumption of responsibility for another person, usually a child

The practice of adoption (taking on the role of parent to anyone who is not one’s own offspring) dates from antiquity, but the Code of Hammurabi (c. 1772 BCE), created by Hammurabi (r. 1792–1750 BCE), sixth king of Babylon, was probably the first body of laws to spell out in detail the responsibilities of adopters and the rights of those adopted. It is most usual that an adult adopts a child, but this is not invariably the case. In the course of history, men of almost any age have been adopted in order to preserve a male line of inheritance for political, religious, and economic reasons.

Adoption as it is now widely understood—the commitment of an adult to rear and nurture another person’s child—was first enshrined in law in 1851 by the U.S. state of Massachusetts. In Britain, legislation to permit the practice was passed in 1926, largely to ensure care for children who had been orphaned in World War I, but partly also to cope with an increase in the number of illegitimate births.

“Adoption is a redemptive response to tragedy … in this broken world.”

Katie J. Davis, Christian and author

In the aftermath of World War II, international adoptions became more common as children of one country were brought up by adults of other, usually more prosperous, nations. Slower to gain acceptance was interracial adoption, which was strongly opposed in the United States and elsewhere.

After 1970, the stigma of giving birth out of wedlock was greatly reduced, with the result that fewer children in the adopting countries became available for adoption. The rules governing eligibility to adopt have also changed and now include single parents and, in some countries, same-sex couples. GL

c. 1600 BCE

Toothbrush

Africa

The use of a specially designed implement to keep the teeth disease free

A Masai woman in Kenya uses a stick as a toothbrush. Chewing sticks have been used by humans to keep teeth clean for thousands of years.

The idea that the good health of the mouth, and especially that of the teeth, might be maintained by a scrupulous hygienic routine is not nearly as new as many people would imagine. The task of removing food particles from the teeth has long been familiar to humanity; even the Neanderthals picked their teeth, as evidenced by distinctive markings on the teeth of some Neanderthal skulls. Evidence from prehistoric sites also suggests that early man used tools such as wooden chewing sticks, twigs from specific tree species, and even feathers and splinters of bone to remove food from their teeth, although it is unknown whether they suspected that doing so would prevent decay. Today, certain types of chewing sticks, such as mefaka in Ethiopia, continue to be recommended, especially for children, as being just as good as the toothbrush for maintaining good dental hygiene.

“Every tooth in a man’s head is more valuable than a diamond.”

Miguel de Cervantes, Don Quixote (1605)

The earliest known toothbrush has been dated to a site in Africa from around 1600 BCE. By the end of the second millennium BCE, Mesopotamian cultures had a wide range of dental hygiene practices. A number of prescriptions for mouthwashes have been discovered by archaeologists, in addition to directions for brushing and cleaning the teeth with an index finger wrapped in cloth. A modern variant of this idea is the finger toothbrush, fitted over a forefinger and most often used by parents to clean the teeth of young children.

Many people today have integrated modern dental hygiene techniques into their daily routine, and this, when combined with regular professional care, has greatly reduced the incidence of severe oral diseases in society. Regular dental hygiene is often a problem in developing nations and in poor populations, but improvements in hygienic practices immediately lead to significant improvements in dental health. Regular use of a toothbrush serves as a daily reminder that good hygiene helps maintain optimal health. MT

c. 1600 BCE

Mystical Experience

Ancient Greece

Personal experience of a transcendent reality or state of consciousness

A sadhu (Hindu holy man) in northern India. Sadhus are solely dedicated to achieving the transcendent state of moksha (liberation) through meditation and contemplation.

A mystical experience is one in which an individual is aware of transcendent truth at a depth, or in a dimension, that is not experienced in typical consciousness. (The experience is sometimes referred to as a state of altered consciousness.) Such experiences have their origin in mystery religion; the Eleusinian Mysteries, annual initiation ceremonies in the cults of the goddesses Demeter and Persephone, were held in strict secrecy from about 1600 BCE at Eleusis, near Athens in ancient Greece. Far from dying out, the mystical ceremonies persisted for 2,000 years, through the era of Hellenic Greece and to the end of the Roman Empire in 476.

“I could not any more have doubted that HE was there than that I was.”

William James, The Varieties of Religious Experience (1902)

At the turn of the twentieth century, the U.S. psychologist and philosopher William James (1842–1910) defined a mystical experience as always having four key features: transience, indescribability, instructiveness, and passivity. The mystical experience is unusual in that it takes the subject away from normal perceptions for a limited period of time. When that person returns from the event, he or she has difficulty explaining what has happened, since transcendent reality does not lend itself to language. Despite this difficulty, the experience educates or fundamentally changes the experiencer in some way. A mystical experience cannot be controlled by the subject, and operates in a way that is at least partially separate from the individual’s will.

All personal religious experiences are linked to mystical experiences, and the implication of their occurrence is that an unsuspected, different reality lies beyond normal human experience and understanding. Mystical experiences indicate that human reality is not the only reality, and so they confront those who experience them or believe in them with the fundamental limits of human knowledge. For those who undergo them, mystical experiences offer proof and firsthand knowledge of human transcendence. TD

c. 1550 BCE

Abortion

Ancient Egypt

The purposeful termination of a human pregnancy

A statue representing a child lost to miscarriage or abortion, at the Hase Dera Temple, Kamakura, Japan.

An abortion results in the termination of a pregnancy by causing the death of a fetus or embryo and/or its removal from a womb. Whereas a miscarriage results in the end of a pregnancy because of naturally occurring factors or from the unintended consequences of intentional actions, an abortion is an act performed specifically to terminate pregnancy.

It is not clear when the first abortion took place. Chinese lore holds that Emperor Shennong gave his concubines mercury to cause them to have abortions as early as about 3000 BCE. The ancient Babylonian Code of Hammurabi (c. 1772 BCE) contained prohibitions against causing a woman to miscarry by assaulting her. However, the first concrete evidence of purposeful abortions appears in about 1550 BCE with the Ebers Papyrus, an ancient Egyptian medical text that, among descriptions of other medical practices, includes a recipe for a potion that could stop a pregnancy at any stage. By ancient Greek and Roman times (c. 500 BCE), abortion had become fairly common, so much so that the plant most often used to induce abortions, silphium, is now believed to be extinct.

“I will not give to a woman a pessary to produce abortion.”

Hippocratic Oath

Modern medicine has enabled abortions to become more easily available, and also made them considerably safer, yet the moral and ethical implications of the procedure have, ever since its invention, spurred rigorous and widespread debate. Is it morally or ethically permissible to perform abortions? If so, when? And who decides? Such questions, and their answers, continue to vex individuals and societies across the globe. MT

c. 1550 BCE

Birth Control

Ancient Egypt

Controlling human fertility in order to prevent pregnancy

Birth control is the limiting of human reproduction by any method: total or periodic sexual abstinence, coitus interruptus, contraception, abortion, or sterilization. The practice is ancient, but the term was coined in 1914 by Margaret Sanger (1879–1966), a U.S. nurse, who founded the first U.S. clinic for this purpose and helped to establish the notion of planned parenthood.

The earliest description of birth control can be traced back to about 1550 BCE and the ancient Egyptian Ebers Papyrus, which explains how to mix dates, acacia, and honey into a paste, and smear it over wool for use as a pessary. Another birth control method, the condom, also dates from this period. Condoms were made from animal intestines until the invention of vulcanized rubber in 1839, which thereafter became their standard material. Other contraceptive methods that emerged in the nineteenth century include vaginal barriers, such as caps and diaphragms, and stem pessaries. Also introduced was sterilization: vasectomy for men and surgical occlusion of the fallopian tubes for women. From the 1960s, the oral contraceptive pill had a liberating effect on attitudes to sex, which became more detached than ever from concerns about pregnancy.

“Birth control is the first … step a woman must take toward the goal of her freedom.”

Margaret Sanger, U.S. social reformer

The reasons for birth control are primarily personal: the mother’s health; a combination of sexual desire and reluctance to make a commitment to either a partner or a potential child; concern about the economic consequences of a dependent. There are wider social issues, too: some predict that by 2100 the population of the world could reach 16 billion, a total likely to place a severe strain on resources. GL

c. 1500 BCE

Anesthesia

Middle East

A method of effectively reducing pain during surgery

The use of anesthetics dates back to around 1500 BCE, when opium poppies were harvested in the Middle East and eastern Mediterranean. There are many accounts from the medieval Islamic world of both oral and inhalant anesthetics being used during surgical operations—mostly sponges soaked in various narcotic preparations and placed over the patient’s nose and mouth.

In 1800 the British chemist Humphry Davy (1778–1829) first recorded the effects of inhaling nitrous oxide, observing that it made him dizzy and even euphoric. It would be another forty years before it found broad clinical acceptance as an anesthetic, but the days when pain was considered an unavoidable ingredient of life, or God’s punishment to women in labor, or a way of purifying a wicked heart, were numbered.

“The state should, I think, be called ‘anesthesia.’ This signifies insensibility.”

William T. G. Morton, pioneer of the use of ether

Nitrous oxide may have been discovered by the British scientist Joseph Priestley in 1772, but it was Davy who first suggested it might be used to relieve the pain and shock of surgery if inhaled. In his now-famous paper, “Researches, Chemical and Philosophical, Chiefly Concerning Nitrous Oxide, or Dephlogisticated Nitrous Air and its Respiration” (1800), Davy wrote: “As nitrous oxide in its extensive operation appears capable of destroying physical pain, it may probably be used with advantage during surgical operations …” Davy may have become distracted in the years that followed by other work, including the development of the voltaic battery and the invention of a safety lamp for miners (the Davy lamp), but his search for an anesthetic for everyday surgical use was a watershed in the development of modern medicine. BS

c. 1500 BCE

Regicide

Babylonia

The killing of a monarch in order to transfer power to an alternative authority

An illuminated manuscript from the twelfth century, showing the massacre of a king and his servants.

The idea of regicide—the deliberate killing of a monarch—is revealed in a long series of unfortunate monarchs, pharaohs, and emperors killed at the hands of their people. From around 1500 BCE, in the annual festival of Sacaea in Babylon, a mock king—a convicted criminal who was allowed to reign for five days, and to enjoy the king’s harem—was installed, then tortured and killed. His death, symbolic of that of the king, was offered in sacrifice to the Sumerian god Tammuz.

Regicide has frequently been used as a means of transferring power to another authority. The history of the Roman Empire (27 BCE–476) features a number of cases, such as the murder of the Emperor Caligula in 41 CE by his personal bodyguards, the Praetorian Guard. At least fifty Roman emperors are known to have suffered a similar fate. Common use of the term “regicide” in the sense of “monarch-killer” began in the sixteenth century, when Pope Sixtus V described Queen Elizabeth I as such after the execution of Mary, Queen of Scots in 1587. Less than a century later, after the first English Civil War (1642–46), King Charles I became the subject of the best-known regicide in British history.

“I am afraid to think what I have done, Look on’t again I dare not.”

William Shakespeare, Macbeth (1606)

The idea of regicide is by no means confined to the annals of the past. Regicides of the twentieth century include the killing in 1934 of Alexander I of Yugoslavia by a member of a revolutionary organization, and the execution in 1958 of Faisal II of Iraq, ordered by Colonel Abdul Karim Qassim. The ideological conflicts represented by regicides such as these, and the plays for power that motivate them, are as significant in the modern world as they were in the past. LWa

c. 1500 BCE

Exorcism

Babylonia

A ritual to expel the forces of evil from the innocent and possessed

A thirteenth-century bas-relief from the font of Modena Cathedral, Italy, depicting an exorcism.

Exorcism is a religious practice that is believed to expel evil spirits, demons, or the devil from a possessed person, object, or place. Demonic possession is an ancient idea, with evidence of practices resembling exorcism occurring as early as the second millennium BCE among the Babylonian people of Mesopotamia. Since then, most of the world’s religions have developed exorcism rituals. Though these differ with respect to the methods employed, they are all based on the idea of ridding a possessed person or place of a metaphysical evil.

In the West, the idea of exorcism is most strongly associated with the Roman Catholic Church. Instructions for a Catholic exorcism are given in section thirteen of the Rituale Romanum, which details all the services that can be performed by a Catholic priest or deacon, and the ritual has been performed by Catholic exorcists around the world. This practice was particularly common during the fifteenth to eighteenth centuries and was often linked to accusations of witchcraft. Today the Catholic Church allows exorcisms to be performed only by an ordained priest with the permission of a local bishop. However, the Church maintains that instances of genuine exorcism are very rare. Those performing an exorcism are instructed to make a prior assessment of the apparently possessed in order to determine whether they in fact exhibit symptoms of mental disorders, such as schizophrenia, psychosis, or dissociative identity disorder.

From a nonreligious, scientific perspective, it is generally believed that all cases of demonic possession can be explained in terms of mental disorders such as those listed above. However, a rise of 50 percent in exorcisms performed during the 1960s and 1970s, fueled in part by the Hollywood movie The Exorcist (1973), demonstrates that the ancient beliefs behind the practice are still alive in the modern world. LWa

c. 1500 BCE

Zero

Babylonia

The symbol that transformed the understanding and practice of mathematics

A Dilmun clay tablet from c. 1450 BCE that was used to calculate pay for forced labor.

The symbol “0” is so widely recognized that it is hard to imagine a world in which there was no clear way of indicating “zero.” But that was the case until relatively recently in human history.

The use of a symbol to represent “nothingness” is an ancient idea, but the notation for it was not always satisfactory. In the second millennium BCE the Babylonians represented nothing with nothing: they merely left a space. This was open to misinterpretation, so they later took to using two slanted wedges as a placeholder to mark an empty position. (A placeholder is a symbol used to indicate that there is nothing in a certain column, just as today we write “one hundred” as “100” to show that there are no tens and no units in the quantity represented.)
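
A modern illustration (ours, not one drawn from the Babylonian tablets themselves) shows why the placeholder matters. In base-60 notation, a numeral written as two “ones” separated only by a gap could be read in more than one way:

\[ 1 \times 60 + 1 = 61 \qquad \text{or} \qquad 1 \times 60^2 + 0 \times 60 + 1 = 3601 \]

The two wedges removed the ambiguity by marking the empty middle position, just as the zeros in our “100” fix its value as \( 1 \times 10^2 + 0 \times 10 + 0 \times 1 \).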

In the seventh century CE, the Indian mathematician Brahmagupta (c. 598–665) drew up rules for dealing with zero as a number, not just a placeholder. Hindus adopted this idea in their original binary numbering system, and retained it when they later converted to the decimal system that is now used universally.
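
Brahmagupta’s rules, set out in his treatise the Brahmasphutasiddhanta (628 CE), can be summarized in modern notation (the symbolic form is ours, not his) as follows, for any number \( a \):

\[ a + 0 = a, \qquad a - 0 = a, \qquad a \times 0 = 0 \]

He also ruled that \( 0 \div 0 = 0 \), a definition that later mathematicians rejected, and the question of what it means to divide a nonzero number by zero remained troublesome for centuries.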

The latter notation, including the placeholder, was then taken up by the Arabs, and reached the West in the twelfth century through translations of a treatise on arithmetic by the Persian mathematician al-Khwarizmi. The use of zero aroused controversy in the early Christian church, which questioned whether it was right to attribute a value to something that does not exist, and preferred to retain the Roman system (I, V, X, L, C, D, M), in which zero did not feature.

In the modern world, the value of zero remains literally nothing, but its figurative applications are both numerous and useful, particularly in science and mathematics. It is almost ubiquitous, its only notable absence being from the Western calendar, which goes from 1 BCE to 1 CE, with no year zero in between. GL

c. 1500 BCE

The End of the World

Zoroastrian religion

The belief in an inevitable end to the current world order

The angels of the Apocalypse herald the end of the world, in an engraving by Albrecht Dürer from c. 1490.

The idea of the end of the world is usually not about the end of all that there is, but about the end of the present order of the world. The concept originated with the Zoroastrian religion, established c. 1500 BCE, but its most influential exposition in the West is the Book of Revelation, written by John of Patmos (fl. first century CE).

The idea is primarily religious in nature, and the branch of theology concerned with the end of the world is called eschatology. The concept is found in a number of religious traditions, including Hinduism, Norse mythology, Judaism, Islam, and Christianity. These traditions differ in what is supposed to follow the end of the world: either a new beginning inaugurating a new cycle of history (as in Norse mythology) or a perfect and unchanging state of affairs on Earth or in Heaven (as in Revelation). They generally associate the end of the world with judgment and redemption, and suppose knowledge of it to be conferred in advance, by prophecy or revelation. That knowledge is then useful in urging people to conduct themselves with an eye toward their fate when the world actually ends.

“God will invade … when that happens, it is the end of the world.”

C. S. Lewis, The Case for Christianity (1942)

Religious forms of eschatology remain common, and tend to receive extensive news coverage when believers claim the end to be near. In modern times, eschatology sometimes appears in secularized forms, as when a particular socioeconomic system—such as full communism for Karl Marx or liberal democracy for Francis Fukuyama—is regarded as the final stage of human history. The end of the world remains a potent idea in literature and the arts. GB

c. 1500 BCE

Atonement

Ancient Israel

Offering a sacrifice in order to express repentance for wrongdoing

The idea of making amends for a wrongdoing (legal or moral) has been present in every society. The notion has had a significant impact through the Hebrew concept of atonement, a feature of the oral origins of the Torah (c. 1500 BCE). Atonement refers to a sacrifice made in repentance to gain forgiveness of sin.

The practice was institutionalized in the sacrifices of Hebrew Levitical worship, most notably on Yom Kippur, or the Day of Atonement, the holiest day in the Jewish religious year, instituted by Yahweh through Moses. Some commentators say Yom Kippur commemorates Moses’s return from Mount Sinai with the second set of stone tablets, an expression of God’s forgiveness for the Israelites’ worship of the golden calf. In the Hebrew and later Jewish tradition, atonement is associated with any sacrifice for the forgiveness of transgressions against God, but not against others. Jesus Christ’s crucifixion extended this aspect of atonement into the Christian tradition. Christian theologians associate Christ’s death with the Levitical sacrifices for sin against God, in particular the Passover sacrifice.

“If there be laid on him a kofer [atonement payment, ransom], then he shall give …”

Orthodox Jewish Bible, Shemot [Exodus] 21:30

As the Christian tradition took hold in the West, the concept and language of atonement followed, shaping both theological and philosophical perceptions of moral and religious wrongdoing. Subsequently, the meaning of the word “atonement” was broadened to include any amends offered by one disputant to another. However, the word in English has its root in the older English concept of onement, meaning “reconciled.” Thus, to be “at onement” is to have set aside differences or made peace. JW

c. 1500 BCE

Atheism

India

Belief that no God, gods, divine beings, or supernatural phenomena exist

Atheism can be described as a range of positions on the nonexistence of the divine, deities, or gods. In its weaker sense, atheism is simply the absence of belief in any gods, divine beings, or supernatural phenomena; in its stronger sense, it is the positive belief that no such beings exist.

No single originator is credited with having first identified the notion of atheism. However, the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE, make the first known references to the rejection of the idea of gods. In the Western world, the ancient Greek poet and philosopher Diagoras of Melos (fl. fifth century BCE) was widely recognized as the first outspoken atheist, a stance that forced him to flee Athens. The term “atheist” was also broadly applied to early Christians who did not believe in the pagan pantheon of the Roman Empire. However, widespread and public assertions that there were no gods did not become commonplace until after the French Revolution (1789–99). Today, atheism is common in many nations, though rates of non-belief are often difficult to determine precisely.

“Is a man merely a mistake of God’s? Or is God merely a mistake of man?”

Friedrich Nietzsche, philosopher

The idea of gods, the divine, or supernatural agents is often closely related to very basic, driving questions. Who created the universe? How did we come to be here? For the atheist, the answer does not rely upon a supernatural or divine basis. Atheism, though not a uniform set of beliefs or body of doctrine, allows for the possibility that there is no divine, godly answer to our questions. MT

c. 1500 BCE

Agnosticism

India

Belief that it is impossible to know if the supernatural, including God, exists

The agnostic biologist Thomas Huxley (pictured here in c. 1895) was a champion of Charles Darwin.

Agnosticism holds that the nature of God, gods, or supernatural phenomena is such that humanity can never know whether or not they exist. It is a statement about what kind of knowledge a person can possess and about what kind of belief is proper or moral to hold. According to its originator, the British biologist Thomas Huxley (1825–95), agnosticism describes a method by which people can use their intellect to come to hold, or refuse to hold, any particular belief.

Even though the term “agnosticism” did not come into popular use until Huxley coined it in 1869, the idea has existed for some 3,000 years. The earliest known expression of it comes from the Hindu Vedas, produced between c. 1500 and c. 500 BCE, which express skepticism about the ability to answer fundamental questions about existence. Ancient Greek philosophers voiced similar opinions about the nature of certainty and knowledge. Huxley created the term from the Greek roots a, meaning “without,” and gnosis, meaning “knowledge.” His belief was that knowledge of God is unattainable, and that a rational person can therefore hold no belief about it.

“Who knows for certain? … None knoweth whence creation has arisen …”

The Rigveda, Hindu scripture

In modern times, people often use “agnostic” to denote those who describe themselves as being unsure about whether a God exists. Yet the existence of the divine is not something agnosticism purports to answer. It expresses skepticism, especially regarding the extent of human comprehension. It is also a statement about the morality of hubris, holding that it is immoral to believe in something that has no basis, or to assert an answer to an unanswerable question. MT

c. 1500 BCE

Samsara

India

The continuous cycle of reincarnation to which all human beings belong

A Dharmachakra (wheel of law) from c. 900, which symbolizes the six realms of existence in samsara.

The concept of samsara was first developed in the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE. Though samsara is principally associated with Hinduism and Buddhism, the concept features in other religions such as Jainism and Sikhism, and is often referred to in popular culture.

Samsara means “to flow together” and refers to the cycle of rebirth in which an individual is reincarnated in a succession of lives based upon the karma (a sort of metaphysical record of a person’s moral worth) received for deeds committed during each life. This rebirth is more of a curse than a blessing, though it does offer the opportunity for spiritual cultivation that can bring about release.

In Hinduism, this is closely tied to the varna (caste) system: living according to your dharma (duty) can eradicate karma and earn rebirth in a higher caste that is more capable of attaining moksha, the state in which you realize union with Brahman (ultimate reality) and exit the cycle of rebirth. In Buddhism, karma causes a person to be reincarnated as one of six types of beings: humans, gods, demigods, animals, hungry ghosts, and hell-bound beings. Only humans can realize nirvana, the state in which ignorance is vanquished and karma is eliminated so that you may exit the cycle of rebirth upon death.

“Samsaric pleasures are like salt water, the more we indulge, the more we crave.”

Geshe Sonam Rinchen, Thirty-Seven Practices of Bodhisattvas (1997)

The desire to exit samsara is the driving force in many Eastern religions. Reincarnation is taken as a basic metaphysical assumption throughout Indian religion, and it is the primary justification for the varna system that has structured Indian society for millennia. JM

c. 1500 BCE

Caste System

India

The division of society into four hereditary social classes

The caste system, also known as the varna system, is a hierarchical social structure that has prevailed in the Hindu societies of India and Nepal. Its origins trace back to the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE, and it is a central theme in the 700-verse Bhagavad Gita (c. 100 CE).

“Now I have no caste, no creed, I am no more what I am!”

Kabir, Indian mystic poet

The caste system divides Hindu society into four hereditary social classes. Highest are the Brahmins, who are priests and teachers. Next come the Kshatriyas, who are political leaders and warriors. Third are the Vaishyas, who manage agriculture and commerce. The lowest are the Shudras, who work as servants for the other three castes. Those who are cast out of the varna system are known as “Untouchables” because contact with them was thought to defile the other castes.

Hindu texts justify this system by appeal to karma and rebirth. A person’s actions in this life determine their gunas (qualities) in the next: Brahmins are characterized by sattva (intellect), Kshatriyas by rajas (action), Vaishyas by both rajas and tamas (inertia), and Shudras by tamas alone. These gunas predispose a person toward certain types of work, and society is held to function best when people do the jobs to which they are suited. Each varna has its own spiritual discipline: Brahmins follow jñana (knowledge), Kshatriyas pursue karma (action), Vaishyas practice both karma and bhakti (devotion), while Shudras undertake bhakti. In the twentieth century, Mahatma Gandhi campaigned against the social injustice of the caste system, and his protests helped to bring about its reform. JM

c. 1500 BCE

Dharma

India

The belief that the universe has an inherent order

A first-century BCE Indian sculpture of the footprints of Buddha. On the soles of the feet are two Dharmachakras (wheels of learning).

The concept of dharma dates back to the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE. It is expounded in later Hindu texts, such as the epic work Ramayana (500–100 BCE) and the 700-verse Bhagavad Gita (c. 100 CE), and is present in other Asian traditions such as Buddhism, Jainism, and Sikhism.

“Dharma leads to happiness, but happiness cannot lead to dharma.”

Ramayana (500–100 BCE)

Dharma comes from the Sanskrit word for “uphold” or “support.” In Hinduism, dharma refers to the inherent order of things, both in terms of natural laws and social and ethical norms. Karma is a causal force that connects all things in the universe. As a result of this force, everything that a person does affects not only their own future, but also the futures of others. All human beings have a responsibility to maintain the natural order, which is manifested in the caste system of Hindu society. A person’s actions lead to karma, which determines their gunas (traits) and varna (caste), which in turn dictate the moral obligations that the individual has to other people (dharma). For example, in the Bhagavad Gita, Arjuna’s dharma as a Kshatriya (warrior) obligates him to fight in a war even though he does not want to.

In Buddhism, dharma refers not only to the natural and moral order but also to the teachings of the Buddha. In the Hindu tradition, dharma also determines a person’s duties at the various stages of life (ashramas): in youth, a student’s obligation is to learn; in middle age, a householder is expected to promote the good of society; in advanced age, the forest dweller and renunciant are expected to focus on spiritual cultivation.

Dharma is one of the central metaphysical justifications for the caste system in India. The symbolic representation of dharma, the Dharmachakra or “dharma wheel,” appears in the center of the flag of India, representing the idea that truth and virtue should be the guiding principles of the nation. JM

c. 1500 BCE

Meditation

India

Controlling one’s own mind to realize a new mode of consciousness

A carving of the Buddha sitting in dhyani mudra (meditation pose), at the Gal Vihara temple in Polonnaruwa, Sri Lanka, built during the reign of King Parakramabahu the Great (1153–86).

The practice of meditation encompasses a range of techniques that individuals can use to bring the mind to a different level of consciousness. Meditation can be directed toward many different goals, including self-regulation, religious experience, the building of internal energy, and relaxation. Typically, it involves training the mind to engage in a particular habit of reflection. In some traditions, meditation involves attempting to separate the mind from the other experiences of the body, whereas others emphasize its physical element by encouraging repetitive actions or vocalizations.

“Meditation is the dissolution of thoughts in Eternal awareness.”

Swami Sivananda, Hindu spiritual teacher

Many religious traditions developed practices that were intended to move the individual beyond the experience of the immediate self, and all of these can be considered forms of meditation. The earliest recommendations for the use of meditation can be found in the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE, and in ancient Buddhist texts, which promote meditation as essential for a path to enlightenment. In Tibetan Buddhism, meditation is both a path toward inner reflection to know oneself better and a path ultimately to move beyond the limits of the self.

In several traditions, meditation is intended to have a calming effect on the mind, which is why the term is often used nowadays to refer to a range of quiet relaxation techniques that do not necessarily have religious meaning. Even in the modern world, however, the idea of meditation usually means more than just relaxation. Communication with a reality that goes beyond the typically limited experience of consciousness requires that consciousness be transformed in some way. Thus, most religions include a form of prayer that can be considered a kind of meditation. TD

c. 1500 BCE

Chakras

India

Energy-collection centers of the body that are essential for physical and mental health

A Buddhist Thangka painting from Nepal, showing the seven main chakras of the body.

In the traditions of Buddhism and Hinduism, chakras are centers of energy that correspond to parts of the body. A nonphysical life force is said to travel through the body, and each chakra is linked to a part of this force. Chakras exist in the body along a central channel and are connected to vital bodily functions, including consciousness, vision, communication, health, digestion, reproduction, and survival. In the most common understanding of the system, each chakra is associated with a deity, color, bodily organ or set of organs, and a mantra (a transformative sound or syllable). Bringing the energy of the body in line with the central channel and the chakras is possible through meditation, and this process of alignment plays an important role in achieving fulfillment and enlightenment.

“Kundalini [a corporeal energy] will rise and always cleanse the chakras.”

Shri Mataji Nirmala Devi, founder of Sahaja Yoga

The idea of chakras is an ancient one, found in Sanskrit documents and in the oral traditions of both Buddhism and Hinduism. “Breath channels,” for example, appear in the Hindu Vedas, produced in India between c. 1500 and c. 500 BCE. The idea of a hierarchy of the chakras was introduced later, in eighth-century Buddhist teachings. There is no standard interpretation of chakras in either religion, and chakra systems vary from teacher to teacher within the same tradition.

Chakras are essential if we are to understand the body as a system of energy. Two of the five major world religions are built on the idea that human beings are capable of making peace within themselves as a result of energy systems within the body. The chakras are key to unlocking this inner power. TD

c. 1500 BCE

Karma

India

Every action has consequences that go beyond a mere human lifetime

Karma is a law of causality that first appeared in the Upanishads, the sacred texts that expound the Vedas, the oldest scriptures of Hinduism, produced in India between c. 1500 and c. 500 BCE. Karma is also a key concept in Buddhism and Jainism.

The term karma means “action” in Sanskrit, and refers to the idea that every action has a specific set of causes and effects. Ethically, karma is a metaphysical record of a person’s moral worth. When someone commits an evil act, they acquire karma; when someone does good, they acquire merit, which cancels out karma. Karma is linked to samsara (the cycle of reincarnation) because when people die, their karma determines the type of rebirth they will have in the next life.

In Hinduism, this is closely tied to the varna (caste) system: a virtuous life eradicates karma and guarantees rebirth in a higher caste that is more capable of attaining moksha, a state of unity between a person’s atman (true self) and Brahman (ultimate reality). In Buddhism, life is characterized by suffering; the goal of spiritual cultivation is to eradicate karma and attain nirvana, a state in which all karma is nullified and a person can exit the cycle of rebirth. In Jainism, expunging all karma leads to moksha, a blissful state of liberation from samsara. In Hinduism and Buddhism, people receive karma only for intentional acts, whereas in Jainism, even unintentional acts can generate karma.

“It is God’s inviolable law that karma never fails to produce its effect.”

Mahatma Gandhi, Indian nationalist leader

Due to the prevalence of Hinduism and Buddhism throughout Asia, karma has become a central moral paradigm. The doctrine of karma has influenced the spiritual beliefs of numerous traditions, including Sikhism, Falun Gong, and Theosophy. JM

c. 1500 BCE

Mantra

India

Sounds, syllables, and words as the source of spiritual transformation

A Mani stone inscribed with the six-syllable Buddhist mantra of Avalokiteshvara.

One of the primary goals for those who practice Hinduism and Buddhism is to experience a transformation of consciousness through particular acts of the mind and body. A mantra is a vocalized or written repetition of syllables, words, or phrases that helps to focus the mind and body in order to achieve this transformation. In some mantras, the words themselves become an action that can bring about the transformation. The sound or words of a mantra are representative of an ultimate reality that is meaningful beyond the understanding of the person who is pronouncing them. By performing a mantra, a person is able to place their mind and will in line with the ultimate reality.

The most recognizable mantra is the sound or syllable “Om.” According to the Upanishads, which form part of the Hindu Vedas, written between c. 1500 and c. 500 BCE, the syllable “Om” represents all of creation. Meditating while uttering this syllable brings the subject closer to realizing the connectedness of all things in the universe. Mantras are also meaningful in the Buddhist tradition, in which they have been expanded beyond vocalized sounds to include written language and characters. As Buddhism spread to China, the writing of mantras became more important as a form of meditation. In either form, vocalized or written, repetition of mantras is a common way of meditating on their fundamental truth.
