• CHAPTER XIV •


THE STAIRS


I

We now come to the most dangerous part of the house—in fact, one of the most hazardous environments anywhere: the stairs. No one knows exactly how dangerous the stairs are, because records are curiously deficient. Most countries keep records of deaths and injuries sustained in falls, but not of what caused the falls in the first place. So in the United States, for instance, it is known that about twelve thousand people a year hit the ground and never get up again, but whether that is because they have fallen from a tree, a roof, or off the back porch is unknown. In Britain, fairly scrupulous stair-fall figures were kept until 2002, but then the Department of Trade and Industry decided that keeping track of these things was an extravagance it could no longer afford, which seems a fairly misguided economy, considering how much fall injuries cost society. The last set of figures indicated that a rather whopping 306,166 Britons were injured seriously enough in stair falls to require medical attention that year, so it is clearly more than a trifling matter.

John A. Templer of the Massachusetts Institute of Technology, author of the definitive (and, it must be said, almost only) scholarly text on the subject, The Staircase: Studies of Hazards, Falls, and Safer Design, suggests that all fall-injury figures are probably severely underestimated anyway. Even on the most conservative calculations, however, stairs rank as the second most common cause of accidental death, well behind car accidents, but far ahead of drownings, burns, and other similarly grim misfortunes. When you consider how much falls cost society in lost working hours and the strains placed on health systems, it is curious that they are not studied more attentively. Huge amounts of money and bureaucratic time are invested in fire prevention, fire research, fire codes, and fire insurance, but almost none is spent on the understanding or prevention of falls.

“Perspective of a staircase” by Thomas Malton (photo credit 14.1)

Everybody trips on stairs at some time or other. It has been calculated that you are likely to miss a step once in every 2,222 occasions you use stairs, suffer a minor accident once in every 63,000 uses, suffer a painful accident once in every 734,000, and need hospital attention once every 3,616,667 uses.
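To put those rates into everyday terms, here is a rough illustrative calculation. The figure of twenty stair uses a day is purely an assumption made for the sake of the arithmetic, not something drawn from the studies themselves:

% Assumption: 20 stair uses per day, i.e. 20 x 365 = 7,300 uses per year
\[
\frac{7{,}300}{2{,}222} \approx 3.3 \ \text{missed steps a year}, \qquad
\frac{63{,}000}{7{,}300} \approx 8.6 \ \text{years between minor accidents},
\]
\[
\frac{734{,}000}{7{,}300} \approx 100 \ \text{years between painful accidents}, \qquad
\frac{3{,}616{,}667}{7{,}300} \approx 495 \ \text{years between hospital visits}.
\]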

Eighty-four percent of people who die in stair falls at home are sixty-five or older. This is not so much because the elderly are more careless on stairs, but just because they don’t get up so well afterward. Children, happily, only very rarely die in falls on stairs, though households with young children in them have by far the highest rates of injuries, partly because of high levels of stair usage and partly because of the startling things children leave on steps. Unmarried people are more likely to fall than married people, and previously married people fall more than both of those. People in good shape fall more often than people in bad shape, largely because they do a lot more bounding and don’t descend as carefully and with as many rest stops as the tubby or infirm.

The best indicator of personal risk is whether you have fallen much before. Accident proneness is a slightly controversial area among stair-injury epidemiologists, but it does seem to be a reality. About four persons in ten injured in a stair fall have been injured in a stair fall before.

People fall in different ways in different countries. Someone in Japan, for instance, is far more likely to be hurt in a stair fall in an office, department store, or railway station than is anyone in the United States. This is not because the Japanese are more reckless stair users, but simply because Americans don’t much use stairs in public environments. They rely on the ease and safety of elevators and escalators. American stair injuries overwhelmingly happen in the home—almost the only place where many Americans submit themselves to regular stair use. For the same reason, women are far more likely to fall down stairs than men: they use stairs more, especially at home, where falls most commonly occur.

When we fall on stairs, we tend to blame ourselves and generally attribute the fall to carelessness or inattentiveness. In fact, design substantially influences whether you will fall, and how hurt you will feel when you have stopped bouncing. Poor lighting, absence of handrails, confusing patterns on the treads, risers that are unusually high or low, treads that are unusually wide or narrow, and landings that interrupt the rhythm of ascent or descent are the principal design faults that lead to accidents.

According to Templer, stair safety is not one problem but two: “avoiding the circumstances that cause accidents and designing stairs that will minimize injuries if an accident occurs.” He notes how at one New York City railroad station (he doesn’t say which) the stair edges had been given a nonslip covering with a pattern that made it difficult to discern the stair edge. In six weeks, more than fourteen hundred people—a truly astonishing number—fell down these stairs, at which point the problem was fixed.

Stairs incorporate three pieces of geometry: rise, going, and pitch. The rise is the height between steps, the going is the step itself (technically, the distance between the leading edges, or nosings, of two successive steps measured horizontally), and the pitch is the overall steepness of the stairway. Humans have a fairly narrow tolerance for differing pitches. Anything more than 45 degrees is uncomfortably taxing to walk up, and anything less than 27 degrees is tediously slow. It is surprisingly hard to walk on steps that don’t have much pitch, so our zone of comfort is a small one. An inescapable problem with stairs is that they have to convey people safely in both directions, whereas the mechanics of locomotion require different postures in each direction. (You lean into the stairs when climbing, but hold your center of gravity back in descent, as if applying a brake.) So stairs that are safe and comfortable in the ascent may not be so good for going down, and vice versa. How far the nosing projects outward from the tread, for one thing, can materially affect the likelihood of a mishap. In a perfect world, stairs would change shape slightly depending on whether a user was going up or down them. In practice, every staircase is a compromise.
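As a rough worked example (the seven-inch rise and eleven-inch going are simply convenient figures within the comfortable range mentioned later in the chapter, not the measurements of any particular staircase), the pitch follows directly from rise and going:

% Illustration only: a 7 in rise on an 11 in going, and an 8 in rise on an 8 in going
\[
\text{pitch} = \arctan\!\left(\frac{\text{rise}}{\text{going}}\right), \qquad
\arctan\!\left(\frac{7}{11}\right) \approx 32.5^{\circ}, \qquad
\arctan\!\left(\frac{8}{8}\right) = 45^{\circ}.
\]

The first sits comfortably inside the 27-to-45-degree zone; the second is right at the taxing upper limit.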

Let’s look at a fall in slow motion. Descending a staircase is in a sense a controlled fall. You are propelling your body outward and downward in a manner that would clearly be dangerous if you weren’t fully on top of things. The problem for the brain is distinguishing the moment when a descent stops being controlled and starts being a kind of unhappy mayhem. The human brain responds very quickly to danger and disarray, but it still takes a fraction of a moment—190 milliseconds to be precise—for the reflexes to kick in and for the mind to assimilate that something is going wrong (that you have just stepped on a skate, say) and to clear the decks for a tricky landing. During this exceedingly brief interval the body will descend, on average, seven more inches—too far, generally, for a graceful landing. If this event happens on the bottom step you come down with an unpleasant jolt—more of an affront to your dignity than anything else. But if it happens higher up, your feet simply won’t be able to make a stylish recovery, and you had better hope that you can catch the handrail—or indeed that there is a handrail. One study in 1958 found that in three-quarters of all stair falls no handrail was available at the point of the fall’s origin.
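Those two numbers are consistent with each other. Treating the 190-millisecond lapse as a simple free fall from rest (a simplification, since the body is already in motion, but good enough for a back-of-envelope check) gives almost exactly the seven inches quoted:

% Simplifying assumption: free fall from rest over 0.19 s
\[
d = \tfrac{1}{2} g t^{2} = \tfrac{1}{2} \times 9.8\ \text{m/s}^{2} \times (0.19\ \text{s})^{2} \approx 0.18\ \text{m} \approx 7\ \text{inches}.
\]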

The two times to take particular care on staircases are at the beginning and end. As many as one-third of all stair accidents occur on the first or last step, and two-thirds occur on the first or last three steps. The most dangerous circumstance of all is having a single step in an unexpected place. Nearly as dangerous are stairs with four or fewer risers. They seem to inspire overconfidence.

Not surprisingly, going downstairs is much more dangerous than going up. Over 90 percent of injuries occur during descent. The chances of having a “severe” fall are 57 percent on straight flights of stairs, but only 37 percent on stairs with a dogleg. Landings, too, need to be of a certain size—the width of a step plus the width of a stride is considered about right—if they are not to break the rhythm of the stair user. A broken rhythm is a prelude to a fall.

For a long time it was recognized that people going up and down steps appreciate being able to do so with a certain rhythm, and that this instinct could most readily be satisfied by having broad treads on short climbs and narrower treads on steeper climbs. Classical writers on architecture had surprisingly little to say on the design of stairs, however. Vitruvius merely suggested that stairs should be well lighted. His concern was not to reduce the risk of falls but to keep people moving in opposite directions from colliding (another reminder of just how dark it could be in the pre-electric world). It wasn’t until the late seventeenth century that a Frenchman named François Blondel devised a formula that mathematically fixed the relationship between riser and tread. Specifically, he suggested that for every unit of increased height the depth of tread should be decreased by two units. The formula was widely adopted and even now, more than three hundred years later, remains enshrined in many building codes even though it doesn’t actually work very well—or indeed at all—on stairs that are either unusually high or unusually low.
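Blondel's tradeoff is usually codified as a single linear relation between riser and going, the form in which it survives in many building codes. The constant is conventionally put at roughly twenty-four to twenty-five inches (63 to 65 centimeters), taken to approximate a comfortable human pace:

% Blondel's relation: 2R + G is roughly constant; each extra unit of riser costs two units of going
\[
2R + G \approx 24\text{–}25\ \text{inches} \quad (63\text{–}65\ \text{cm}),
\]

so that every unit added to the riser height $R$ removes two units from the going $G$, exactly as Blondel prescribed.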

In modern times, the person who took the design of stairs most seriously was, surprisingly, Frederick Law Olmsted. Although almost nothing in his work required it of him, Olmsted measured risers and treads fastidiously—sometimes obsessively—for nine years in an attempt to arrive at a formula that ensured staircase comfort and safety in both directions. His findings were converted into a pair of equations by a mathematician named Ernest Irving Freese. They are:

and

The first, I am told, is for when the going is fixed, and the second for when it is not.

In our own time, Templer suggests that risers should be between 6.3 and 7.2 inches, and that goings should never be less than 9 inches and ought ideally to be closer to 11. Look around, though, and you will see huge variability. In general, according to the Encyclopaedia Britannica, U.S. steps tend to be slightly higher, per unit of tread, than British ones, and European ones higher still, but it doesn’t quantify the statement.

In terms of the history of stairs, not a great deal can be said. No one knows where stairs originated or when, even roughly. The earliest, however, may not have been designed to convey people upward to an upper story, as you might expect, but rather downward, into mines. In 2004, the most ancient wooden staircase yet found, dating from about three thousand years ago, was discovered a hundred meters underground in a Bronze Age salt mine at Hallstatt in Austria. It was possibly the first environment in which an ability to ascend and descend by foot alone (as opposed to a ladder, where hands are needed, too) was a positive and necessary advantage since it would leave both arms free to carry heavy loads.

In passing, one linguistic curiosity is worth noting. As nouns, upstairs and downstairs are surprisingly recent additions to the language. Upstairs isn’t recorded in English until 1842 (in a novel called Handy Andy by one Samuel Lover), and downstairs is first seen the following year in a letter written by Jane Carlyle. In both cases, the context makes clear that the words were already in existence—Jane Carlyle was no coiner of terms—but no earlier written records have yet been found. The upshot is that for at least three centuries people lived on multiple floors yet had no convenient way of expressing it.


II

While we are on the topic of how our houses can hurt us, we might pause on the landing for a moment and consider one other architectural element that has throughout history proved lethal to a startlingly large number of people: the walls, or more specifically the things that go on the walls, namely, paint and wallpaper. For a very long time both were, in various ways, robustly harmful.

Consider wallpaper, a commodity that was just becoming popular in ordinary homes at the time Mr. Marsham built his rectory. For a long time wallpaper—or “stained paper,” as it was still sometimes called—had been very expensive. It was heavily taxed for over a century, and it was also extremely labor-intensive to make. It was made not from wood pulp but from old rags. Sorting through rags was a dirty job that exposed the sorters to a range of infectious diseases. Until the invention in 1802 of a machine that could create continuous lengths of paper, the maximum size of each sheet was only two feet or so, which meant that sheets had to be joined with great skill and care. In the 1750s the Countess of Suffolk paid £42 to wallpaper a single room, at a time when a good London house cost just £12 a year to rent. Flocked wallpaper, made from dyed stubbles of wool stuck to the surface of the paper, became wildly fashionable after about 1750 but presented additional dangers to those involved in its manufacture, as the glues were often toxic.

When the wallpaper tax was finally lifted, in 1830, wallpaper really took off (or perhaps I should say really went on). The number of rolls sold in Britain leaped from one million in 1830 to thirty million in 1870, and this was when it really started to make a lot of people sick. From the outset wallpaper was often colored with pigments that used large doses of arsenic, lead, and antimony, but after 1775 it was frequently soaked in an especially insidious compound called copper arsenite, which was invented by the great but wonderfully hapless Swedish chemist Carl Scheele.* The color was so popular that it became known as Scheele’s green. Later, with the addition of copper acetate, it was refined into an even richer pigment known as emerald green. This was used to color all kinds of things—playing cards, candles, clothing, curtain fabrics, and even some foods. But it was especially popular in wallpaper. This was dangerous not only to the people who made or hung the wallpaper but also to those who lived with it afterward.



By the late nineteenth century, 80 percent of English wallpapers contained arsenic, often in very significant quantities. A particular enthusiast was the designer William Morris, who not only loved rich arsenic greens but was on the board of directors of (and heavily invested in) a company in Devon that made arsenic-based pigments. Especially when damp was present—and in English homes it seldom was not—the wallpaper gave off a peculiar musty smell that reminded many people of garlic. Homeowners noticed that bedrooms with green wallpapers usually had no bedbugs. It has also been suggested that poisonous wallpaper could well account for why a change of air was so often beneficial for the chronically ill. In many cases they were doubtless simply escaping a slow poisoning. One such victim was Frederick Law Olmsted, a man we seem to be encountering more often than might be expected. He suffered apparent arsenic poisoning from bedroom wallpaper in 1893, at just the time people were finally figuring out what was making them unwell in bed, and needed an entire summer of convalescence—in another room.

Paints were surprisingly dangerous, too. The making of paints involved the mixing of many toxic products—in particular lead, arsenic, and cinnabar (the principal ore of mercury). Painters commonly suffered from a vague but embracing malady called painters’ colic, which was essentially lead poisoning with a flourish. Painters purchased white lead as a block, then ground it to a powder, usually by rolling an iron ball over it. This got a lot of dust onto their fingers and into the air, and the dust so created was highly toxic. Among the many symptoms painters tended to come down with were palsies, racking coughs, lassitude, melancholy, loss of appetite, hallucinations, and blindness. One of the quirks of lead poisoning is that it causes a swelling of the retina that makes some victims see halos around objects—an effect Vincent van Gogh famously exploited in his paintings. It is probable that he was suffering from lead poisoning himself. Artists often did. One of those made seriously ill by white lead was James McNeill Whistler, who used a lot of it in creating the life-sized painting The White Girl.

Today lead paint is banned almost everywhere except for certain very specific applications,* but it is much missed by conservators because it gave a depth of color and a mellow air that modern paints really can’t match. Lead paint looks especially good on wood.

• • •

Painting also involved many problems of demarcation. Who was allowed to do what in England was very complicated, thanks to the system of craft guilds, which meant that some practitioners could apply paint, some could apply distemper (a kind of thin paint), and some could do neither. Painters did most of the painting, as you would naturally expect, but plasterers were allowed to apply distemper to plastered walls—though only in a few shades. Plumbers and glaziers, by contrast, could apply oil paints but not distemper. The reason for this is slightly uncertain, but it probably has to do with the fact that window frames were often made of lead—a material in which both plumbers and glaziers specialized.

Distemper was made from a mixture of chalk and glue. It had a softer, thinner finish that was ideal for plastered surfaces. By the mid-eighteenth century, distempers normally covered walls and ceilings and heavier oil paints covered the woodwork. Oil paints were a more complex proposition. They consisted of a base (usually lead carbonate, or “white lead”), a pigment for color, a binder such as linseed oil to make it stick, and a thickening agent like wax or soap, a slightly surprising addition given that eighteenth-century oil paints were already pretty glutinous and difficult to apply—“like spreading tar with a broom,” in the words of the writer David Owen. It wasn’t until someone discovered that adding turpentine, a natural thinner distilled from the sap of pine trees, made the paint easier to apply that painting became smoother in every sense. Turpentine also gave paint a matte finish, and this became a fashionable look by the late eighteenth century.

Linseed oil was the magical ingredient in paint, because it hardened into a tough film—it was, in essence, what made paint paint. Linseed oil is squeezed from the seeds of flax, the plant from which linen comes (which is why flaxseeds are also called linseeds). Its one dramatic downside is that it is extremely combustible—a pot of linseed oil can, in the right conditions, burst into flame spontaneously—and so it was almost certainly the source of many devastating house fires. It had to be used with special caution in the presence of open flames.

The most elementary finish of all was limewash, or whitewash, which was generally applied to more basic areas, like service rooms and servants’ quarters. Whitewash was just a simple mix of quicklime and water (sometimes mixed with tallow to enhance adhesion); it didn’t last long, but it did have the practical benefit of acting as a disinfectant. Despite the name whitewash, it was often tinted (if rather feebly) with coloring agents.

Painting called for real skill because painters ground their own pigments and mixed their own paints—in other words, created their own colors—and generally did so in great secrecy in order to maintain a commercial advantage over their rivals. (Add resins to linseed oil instead of pigment and you get varnish. Painters made their varnishes in great secrecy, too.) Paint had to be mixed in small portions and used at once, so painters had to be able to make matching batches from day to day. They also had to apply several coats, since even the best paints had little opacity. Covering a wall usually took at least five coats, so painting was a big, disruptive, and fairly technical undertaking.

Pigments varied in price significantly. Duller colors, like off-white and stone, could be had for four or five pence a pound. Blues and yellows were two to three times as expensive, and so tended to be used only by the middle classes and above. Smalt, a shade of blue made with ground glass (which gave a glittery effect), and azurite, made from a semiprecious stone, were dearer still. The most expensive of all was verdigris, which was made by hanging copper strips over a vat of horse dung and vinegar and then scraping off the oxidized copper that resulted. It is the same process that turns copper domes and statues green—just quicker and more commercial—and it made “the delicatest Grass-green in the world,” as one eighteenth-century admirer enthused. A room painted in verdigris always produced an appreciative “ah” in visitors.

When paints became popular, people wanted them to be as vivid as they could possibly be made. The restrained colors that we associate with the Georgian period in Britain, or the colonial period in America, are a consequence of fading, not decorative restraint. In 1979, when Mount Vernon began a program of repainting the interiors in faithful colors, “people came and just yelled at us,” Dennis Pogue, the curator, told me with a grin. “They told us we were making Mount Vernon garish. They were right—we were. But that’s just because that’s the way it was. It was hard for a lot of people to accept that what we were doing was faithful restoration.

“Even now paint charts for colonial-style paints virtually always show the colors from the period as muted. In fact, colors were actually nearly always quite deep and sometimes even startling. The richer a color you could get, the more you tended to be admired. For one thing, rich colors generally denoted expense, since you needed a lot of pigment to make them. Also, you need to remember that often these colors were seen by candlelight, so they needed to be more forceful to have any kind of impact in muted light.”

The effect is now repeated at Monticello, where several of the rooms are painted in the most vivid yellows and greens. Suddenly George Washington and Thomas Jefferson come across as having the decorative instincts of hippies. In fact, however, compared with what followed they were exceedingly restrained.

When the first ready-mixed paints came onto the market in the second half of the nineteenth century, people slapped them on with something like wild abandon. It became fashionable to have not just powerfully bright colors in the home but as many as seven or eight colors in a single room.

If we looked closely, however, we would be surprised to note that two very basic colors didn’t exist at all in Mr. Marsham’s day: a good white and a good black. The brightest white available was a rather dull off-white, and although whites improved through the nineteenth century, it wasn’t until the 1940s, with the addition of titanium dioxide to paints, that really strong, lasting whites became available. The absence of a good white paint would have been doubly noticeable in early New England, for the Puritans had no white paint and didn’t believe in painting anyway. (They thought it was showy.) So all those gleaming white churches we associate with New England towns are in fact a comparatively recent phenomenon.

Also missing from the painter’s palette was a strong black. Permanent black paint, distilled from tar and pitch, wasn’t popularly available until the late nineteenth century. So all the glossy black front doors, railings, gates, lampposts, gutters, downpipes, and other fittings that are such an elemental feature of London’s streets today are actually quite recent. If we were to be thrust back in time to Dickens’s London, one of the most startling differences to greet us would be the absence of black-painted surfaces. In the time of Dickens, almost all ironwork was green, light blue, or dull gray.

Now we may proceed up the stairs to a room that may never actually have killed anyone but has probably been the seat of more suffering and despair than all the other rooms of the house put together.


* Scheele independently discovered eight elements—chlorine, fluorine, manganese, barium, molybdenum, tungsten, nitrogen, and oxygen—but received credit for none of them in his lifetime. He had an unfortunate habit of tasting every substance he worked with, as a way of familiarizing himself with its properties, and eventually the practice caught up with him. In 1786, he was found slumped at his workbench, dead from an accidental overdose.

* Although lead’s dangers have been well known for a long time, it continued to be used in many products well into the twentieth century. Food came in cans sealed with lead solder. Water was often stored in lead-lined tanks. Lead was sprayed onto fruit as a pesticide. Lead was even used in the manufacture of toothpaste tubes. It was banned from domestic paints in the United States in 1978. Although lead has been removed from most consumer products, it continues to build up in the atmosphere because of industrial applications. The average person of today has about 625 times more lead in his system than someone of fifty years ago.
