References and remarks, to try and convince you all I’m not crazy (or, failing that, to simply intimidate you into shutting up about it). Read for extra credit.
I’m hardly the first author to take a stab at rationalising vampirism in purely biological terms. Richard Matheson did it before I was born, and if the grapevine’s right that damn Butler woman’s latest novel will be all over the same territory before you even read this. I bet I’m the first to come up with the Crucifix Glitch to explain the aversion to crosses, though — and once struck by that bit of inspiration, everything else followed.
Vampires were accidentally rediscovered when a form of experimental gene therapy went curiously awry, kick-starting long-dormant genes in an autistic child and provoking a series of (ultimately fatal) physical and neurological changes. The company responsible for this discovery presented its findings after extensive follow-up studies on inmates of the Texas penal system; a recording of that talk, complete with visual aids, is available online;[1] curious readers with half an hour to kill are referred there for details not only on vampire biology, but on the research, funding, and “ethical and political concerns” regarding vampire domestication (not to mention the ill-fated “Taming Yesterday’s Nightmares For A Brighter Tomorrow” campaign). The following (much briefer) synopsis restricts itself to a few biological characteristics of the ancestral organism:
Homo sapiens vampiris was a short-lived Human subspecies which diverged from the ancestral line between 800,000 and 500,000 years BP. More gracile than either neandertal or sapiens, gross physical divergence from sapiens included slight elongation of canines, mandibles, and long bones in service of an increasingly predatory lifestyle. Due to the relatively brief lifespan of this lineage, these changes were not extensive and overlapped considerably with conspecific allometries; differences become diagnostically significant only at large sample sizes (N>130).
However, while virtually identical to modern humans in terms of gross physical morphology, vampiris was radically divergent from sapiens on the biochemical, neurological, and soft-tissue levels. The GI tract was foreshortened and secreted a distinct range of enzymes more suited to a carnivorous diet. Since cannibalism carries with it a high risk of prionic infection,[2] the vampire immune system displayed great resistance to prion diseases,[3] as well as to a variety of helminth and anisakid parasites. Vampiris hearing and vision were superior to those of sapiens; vampire retinas were quadrochromatic (containing four types of cones, compared to only three among baseline humans); the fourth cone type, common to nocturnal predators ranging from cats to snakes, was tuned to near-infrared. Vampire grey matter was “underconnected” compared to Human norms due to a relative lack of interstitial white matter; this forced isolated cortical modules to become self-contained and hypereffective, leading to omnisavantic pattern-matching and analytical skills.[4]
Virtually all of these adaptations are cascade effects that — while resulting from a variety of proximate causes — can ultimately be traced back to a paracentric inversion mutation on the Xq21.3 block of the X-chromosome.[5] This resulted in functional changes to genes coding for protocadherins (proteins that play a critical role in brain and central nervous system development). While this provoked radical neurological and behavioral changes, significant physical changes were limited to soft tissue and microstructures that do not fossilise. This, coupled with the extremely low numbers of vampires even at peak population levels (existing as they did at the tip of the trophic pyramid), explains their virtual absence from the fossil record.
Significant deleterious effects also resulted from this cascade. For example, vampires lost the ability to code for e-Protocadherin Y, whose genes are found exclusively on the hominid Y chromosome.[6] Unable to synthesise this vital protein themselves, vampires had to obtain it from their food. Human prey thus comprised an essential component of their diet, but a relatively slow-breeding one (a unique situation, since prey usually outproduce their predators by at least an order of magnitude). Normally this dynamic would be utterly unsustainable: vampires would predate humans to extinction, and then die off themselves for lack of essential nutrients.
Extended periods of lungfish-like dormancy[7] (the so-called “undead” state) — and the consequent drastic reduction in vampire energetic needs — developed as a means of redressing this imbalance. To this end vampires produced elevated levels of endogenous Ala-(D) Leu-enkephalin (a mammalian hibernation-inducing peptide[8]) and dobutamine, which strengthens the heart muscle during periods of inactivity.[9]
Another deleterious cascade effect was the so-called “Crucifix Glitch” — a cross-wiring of normally-distinct receptor arrays in the visual cortex,[10] resulting in grand mal-like feedback seizures whenever the arrays processing vertical and horizontal stimuli fired simultaneously across a sufficiently large arc of the visual field. Since intersecting right angles are virtually nonexistent in nature, natural selection did not weed out the Glitch until H. sapiens sapiens developed Euclidean architecture; by then, the trait had become fixed across H. sapiens vampiris via genetic drift, and — suddenly denied access to its prey — the entire subspecies went extinct shortly after the dawn of recorded history.
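The Glitch as described is almost algorithmic, so here is a deliberately crude toy model of the trigger condition. Every detail of it (the grid, the neighbour test, the 3% threshold) is my own invention, offered only to make the mechanism concrete: count the cells of a binary visual field where horizontal and vertical edge activity coincide, and trip the seizure when that co-activation spans too wide an arc.

```python
def crucifix_glitch(field, arc_threshold=0.03):
    """Toy trigger for the Crucifix Glitch. `field` is a 2D list of
    0/1 contrast values. A cell co-activates the horizontal and
    vertical arrays when it has both a horizontal and a vertical
    active neighbour - crudely, when it sits where edges intersect.
    The seizure fires when co-activation covers too much of the field."""
    rows, cols = len(field), len(field[0])
    hits = 0
    for r in range(rows):
        for c in range(cols):
            if not field[r][c]:
                continue
            horiz = (c > 0 and field[r][c - 1]) or (c < cols - 1 and field[r][c + 1])
            vert = (r > 0 and field[r - 1][c]) or (r < rows - 1 and field[r + 1][c])
            if horiz and vert:
                hits += 1
    return hits / (rows * cols) > arc_threshold

# A lone vertical bar is safe; a cross trips the wiring.
bar = [[1 if c == 2 else 0 for c in range(5)] for r in range(5)]
cross = [[1 if (c == 2 or r == 2) else 0 for c in range(5)] for r in range(5)]
```

The point of the sketch is only that the trigger is geometric, not symbolic: no part of the model knows what a crucifix means, it just chokes on intersecting right angles.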
You’ll have noticed that Jukka Sarasti, like all reconstructed vampires, sometimes clicked to himself when thinking. This is thought to hail from an ancestral language, which was hardwired into a click-speech mode more than 50,000 years BP. Click-based speech is especially suited to predators stalking prey on savannah grasslands (the clicks mimic the rustling of grasses, allowing communication without spooking quarry).[11] The Human language most closely akin to Old Vampire is Hadzane.[12]
The Human sensorium is remarkably easy to hack; our visual system has been described as an improvised “bag of tricks”[13] at best. Our sense organs acquire such fragmentary, imperfect input that the brain has to interpret their data using rules of probability rather than direct perception.[14] It doesn’t so much see the world as make an educated guess about it. As a result, “improbable” stimuli tend to go unprocessed at the conscious level, no matter how strong the input. We tend to simply ignore sights and sounds that don’t fit with our worldview.
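That guessing game is, at bottom, Bayesian inference, and the way a strong prior steamrolls strong input is easy to show with a toy calculation. The numbers here are invented purely for illustration:

```python
def posterior_rare(prior_rare, lik_rare, lik_common):
    """Bayes' rule for a two-hypothesis world: how much does one noisy
    observation shift belief toward the 'improbable' interpretation?"""
    evidence = prior_rare * lik_rare + (1 - prior_rare) * lik_common
    return prior_rare * lik_rare / evidence

# An observation 4.5x more likely under the rare hypothesis
# (0.9 vs 0.2), against a 1-in-100 prior...
p = posterior_rare(prior_rare=0.01, lik_rare=0.9, lik_common=0.2)
```

Run the numbers and the posterior on the rare hypothesis still sits below five percent: the brain stays roughly 96% convinced of the mundane interpretation, no matter how strong the input feels.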
Sarasti was right: Rorschach wouldn’t do anything to you that you don’t already do to yourself.
For example, the invisibility trick of that young, dumb scrambler — the one who restricted its movement to the gaps in Human vision — occurred to me while reading about something called inattentional blindness. A Russian guy called Yarbus was the first to figure out the whole saccadal glitch in Human vision, back in the nineteen sixties.[15] Since then, a variety of researchers have made objects pop in and out of the visual field unnoticed, conducted conversations with hapless subjects who never realised that their conversational partner had changed halfway through the interview, and generally proven that the Human brain just fails to notice an awful lot of what’s going on around it.[16], [17], [18] Check out the demos at the website of the Visual Cognition Lab at the University of Illinois[19] and you’ll see what I mean. This really is rather mind-blowing, people. There could be Scientologists walking among us right now and if they moved just right, we’d never even see them.
Most of the psychoses, syndromes, and hallucinations described herein are real, and are described in detail by Metzinger,[20] Wegner,[21] and/or Sacks[22] (see also Sentience/Intelligence, below). Others (e.g. Grey Syndrome) have not yet made their way into the DSM[23] — truth be told, I invented a couple — but are nonetheless based on actual experimental evidence. Depending upon whom you believe, the judicious application of magnetic fields to the brain can provoke everything from religious rapture[24] to a sense of being abducted by aliens.[25] Transcranial magnetic stimulation can change mood, induce blindness,[26] or target the speech centers (making one unable to pronounce verbs, for example, while leaving the nouns unimpaired).[27] Memory and learning can be enhanced (or impaired), and the US Government is presently funding research into wearable TMS gear for — you guessed it — military purposes.[28]
Sometimes electrical stimulation of the brain induces “alien hand syndrome” — the involuntary movement of the body against the will of the “person” allegedly in control.[29] Other times it provokes equally involuntary movements, which subjects nonetheless insist they “chose” to perform despite overwhelming empirical evidence to the contrary.[30] Put all this together with the fact that the body begins to act before the brain even “decides” to move[31] (but see [32], [33]), and the whole concept of free will — despite the undeniable subjective feeling that it’s real — begins to look a teeny bit silly, even outside the influence of alien artefacts.
While electromagnetic stimulation is currently the most trendy approach to hacking the brain, it’s hardly the only one. Gross physical disturbances ranging from tumors[34] to tamping irons[35] can turn normal people into psychopaths and pedophiles (hence that new persona sprouting in Susan James’s head). Spirit possession and rapture can be induced through the sheer emotional bump-and-grind of religious rituals, using no invasive neurological tools at all (and not even necessarily any pharmacological ones).[21] People can even develop a sense of ownership of body parts that aren’t theirs, can be convinced that a rubber hand is their real one.[36] Vision trumps proprioception: a prop limb, subtly manipulated, is enough to convince us that we’re doing one thing while in fact we’re doing something else entirely.[37], [38]
The latest tool in this arsenal is ultrasound: less invasive than electromagnetics, more precise than charismatic revival, it can be used to boot up brain activity[39] without any of those pesky electrodes or magnetic hairnets. In Blindsight it serves as a convenient back door to explain why Rorschach’s hallucinations persist even in the presence of Faraday shielding — but in the here and now, Sony has been renewing an annual patent for a machine which uses ultrasonics to implant “sensory experiences” directly into the brain.[40] They’re calling it an entertainment device with massive applications for online gaming. Uh huh. And if you can implant sights and sounds into someone’s head from a distance, why not implant political beliefs and the irresistible desire for a certain brand of beer while you’re at it?
The “telematter” drive that gets our characters to the story is based on teleportation studies reported in Nature,[41] Science,[42], [43] Physical Review Letters,[44] and (more recently) everyone and their dog.e.g., [45] The idea of transmitting antimatter specs as a fuel template is, so far as I know, all mine. To derive plausible guesses for Theseus’s fuel mass, acceleration, and travel time I resorted to The Relativistic Rocket,[46] maintained by the mathematical physicist John Baez at UC Riverside. Theseus’ use of magnetic fields as radiation shielding is based on research out of MIT.[47] I parked the (solar powered) Icarus Array right next to the sun because the production of antimatter is likely to remain an extremely energy-expensive process for the near future.[48], [49]
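If you want to check my arithmetic, the relevant formulas from The Relativistic Rocket are compact enough to sketch in a few lines. The figures below are a generic worked example (one light year at a constant 1g, no turnaround), not Theseus's actual flight plan:

```python
import math

C = 299_792_458.0        # speed of light, m/s
LIGHT_YEAR = 9.4607e15   # metres
YEAR = 3.15576e7         # seconds (Julian year)

def constant_boost(d_metres, a=9.81):
    """Travel times for a rocket under constant proper acceleration
    `a` covering distance `d_metres`, per the standard
    relativistic-rocket equations."""
    # Earth-frame (coordinate) time:
    t = math.sqrt((d_metres / C) ** 2 + 2 * d_metres / a)
    # Ship (proper) time, shortened by time dilation:
    tau = (C / a) * math.acosh(a * d_metres / C**2 + 1)
    return t / YEAR, tau / YEAR

earth_years, ship_years = constant_boost(1 * LIGHT_YEAR)
# One light year at a steady 1g: roughly 1.7 years on Earth's clock,
# about 1.3 years on the ship's.
```

A real mission profile would accelerate to the midpoint and then flip and decelerate (i.e., twice the time these formulas give for half the distance); Baez's page covers that variant too.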
The undead state in which Theseus carries her crew is, of course, another iteration of the venerable suspended animation riff (although I’d like to think I’ve broken new ground by invoking vampire physiology as the mechanism). Two recent studies have put the prospect of induced hibernation closer to realization. Blackstone et al. have induced hibernation in mice by the astonishingly simple expedient of exposing them to hydrogen sulfide;[50] this gums up their cellular machinery enough to reduce metabolism by 90%. More dramatically (and invasively), researchers at Safar Center for Resuscitation Research in Pittsburgh claim[51] to have resurrected a dog three hours after clinical death, via a technique in which the animal’s blood supply was replaced by an ice-cold saline solution.[52] Of these techniques, the first is probably closer to what I envisioned, although I’d finished the first draft before either headline broke. I considered rejigging my crypt scenes to include mention of hydrogen sulfide, but ultimately decided that fart jokes would have ruined the mood.
Blindsight describes Big Ben as an “Oasa Emitter”. Officially there’s no such label, but Yumiko Oasa has reported finding hitherto-undocumented infrared emitters[53], [54] — dimmer than brown dwarves, but possibly more common[55], [56] — ranging in mass from three to thirteen Jovian masses. My story needed something relatively local, large enough to sustain a superJovian magnetic field, but small and dim enough to plausibly avoid discovery for the next seventy or eighty years. Oasa’s emitters suit my needs reasonably well (notwithstanding some evident skepticism over whether they actually exist[57]).
Of course I had to extrapolate on the details, given how little is actually known about these beasts. To this end I pilfered data from a variety of sources on gas giants[58], [59], [60], [61], [62], [63], [64] and/or brown dwarves,[65], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75] scaling up or down as appropriate. From a distance, the firing of Rorschach’s ultimate weapon looks an awful lot like the supermassive x-ray and radio flare recently seen erupting from a brown dwarf that should have been way too small to pull off such a trick.[76] That flare lasted twelve hours, was a good billion times as strong as anything Jupiter ever put out, and is thought to have resulted from a twisted magnetic field.[77]
Burns-Caulfield is based loosely on 2000 CR105, a trans-Neptunian comet whose present orbit cannot be completely explained by the gravitational forces of presently-known objects in the solar system.[78]
Like many others, I am weary of humanoid aliens with bumpy foreheads, and of giant CGI insectoids that may look alien but who act like rabid dogs in chitin suits. Of course, difference for its own arbitrary sake is scarcely better than your average sagittal-crested Roddennoid; natural selection is as ubiquitous as life itself, and the same basic processes will end up shaping life wherever it evolves. The challenge is thus to create an “alien” that truly lives up to the word, while remaining biologically plausible.
Scramblers are my first shot at meeting that challenge — and given how much they resemble the brittle stars found in earthly seas, I may have crapped out on the whole unlike-anything-you’ve-ever-seen front, at least in terms of gross morphology. It turns out that brittle stars even have something akin to the scrambler’s distributed eyespot array. Similarly, scrambler reproduction — the budding of stacked newborns off a common stalk — takes its lead from jellyfish. You can take the marine biologist out of the ocean, but…
Fortunately, scramblers become more alien the closer you look at them. Cunningham remarks that nothing like their time-sharing motor/sensory pathways exists on Earth. He’s right as far as he goes, but I can cite a precursor that might conceivably evolve into such an arrangement. Our own “mirror neurons” fire not only when we perform an action, but when we observe someone else performing the same action;[79] this characteristic has been cited in the evolution of both language and consciousness.[80], [81], [82]
Things look even more alien on the metabolic level. Here on Earth anything that relied solely on anaerobic ATP production never got past the single-cell stage. Even though it’s more efficient than our own oxygen-burning pathways, anaerobic metabolism is just too damn slow for advanced multicellularity.[83] Cunningham’s proposed solution is simplicity itself. The catch is, you have to sleep for a few thousand years between shifts.
The idea of quantum-mechanical metabolic processes may sound even wonkier, but it’s not. Wave-particle duality can exert significant impacts on biochemical reactions under physiological conditions at room temperature;[84] heavy-atom carbon tunnelling has been reported to speed up the rate of such reactions by as much as 152 orders of magnitude.[85]
And how’s this for alien: no genes. The honeycomb example I used by way of analogy originally appeared in Darwin’s little-known treatise[86] (damn but I’ve always wanted to cite that guy); more recently, a small but growing group of biologists have begun spreading the word that nucleic acids (in particular) and genes (in general) have been seriously overrated as prerequisites to life.[87], [88] A great deal of biological complexity arises not because of genetic programming, but through the sheer physical and chemical interaction of its components.[89], [90], [91], [92] Of course, you still need something to set up the initial conditions for those processes to emerge; that’s where the magnetic fields come in. No candy-ass string of nucleotides would survive in Rorschach’s environment anyway.
The curious nitpicker might be saying “Yeah, but without genes how do these guys evolve? How do they adapt to novel environments? How, as a species, do they cope with the unexpected?” And if Robert Cunningham were here today, he might say, “I’d swear half the immune system is actively targeting the other half. It’s not just the immune system, either. Parts of the nervous system seem to be trying to, well, hack each other. I think they evolve intraorganismally, as insane as that sounds. The whole organism’s at war with itself on the tissue level, it’s got some kind of cellular Red Queen thing happening. Like setting up a colony of interacting tumors, and counting on fierce competition to keep any one of them from getting out of hand. Seems to serve the same role as sex and mutation does for us.” And if you rolled your eyes at all that doubletalk, he might just blow smoke in your face and refer to one immunologist’s interpretation of exactly those concepts, as exemplified in (of all things) The Matrix Revolutions.[93] He might also point out that the synaptic connections of your own brain are shaped by a similar kind of intraorganismal natural selection,[94] one catalysed by bits of parasitic DNA called retrotransposons.
Cunningham actually did say something like that in an earlier draft of this book, but the damn thing was getting so weighed down with theorising that I just cut it. After all, Rorschach is the proximate architect of these things, so it could handle all that stuff even if individual scramblers couldn’t. And one of Blindsight’s take-home messages is that life is a matter of degree — the distinction between living and non-living systems has always been an iffy one,[95], [96], [97] never more so than in the bowels of that pain-in-the-ass artefact out in the Oort.
This is the heart of the whole damn exercise. Let’s get the biggies out of the way first. Metzinger’s Being No One[20] is the toughest book I’ve ever read (and there are still significant chunks of it I haven’t), but it also contains some of the most mindblowing ideas I’ve encountered in fact or fiction. Most authors are shameless bait-and-switchers when it comes to the nature of consciousness. Pinker calls his book How the Mind Works,[98] then admits on page one that “We don’t understand how the mind works”. Koch (the guy who coined the term “zombie agents”) writes The Quest for Consciousness: A Neurobiological Approach,[99] in which he sheepishly sidesteps the whole issue of why neural activity should result in any kind of subjective awareness whatsoever.
Towering above such pussies, Metzinger takes the bull by the balls. His “World-zero” hypothesis not only explains the subjective sense of self, but also why such an illusory first-person narrator would be an emergent property of certain cognitive systems in the first place. I have no idea whether he’s right — the man’s way beyond me — but at least he addressed the real question that keeps us staring at the ceiling at three a.m., long after the last roach is spent. Many of the syndromes and maladies dropped into Blindsight I first encountered in Metzinger’s book. Any uncited claims or statements in this subsection probably hail from that source.
If they don’t, then maybe they hail from Wegner’s The Illusion of Conscious Will[21] instead. Less ambitious, far more accessible, Wegner’s book doesn’t so much deal with the nature of consciousness as it does with the nature of free will, which Wegner thumbnails as “our mind’s way of estimating what it thinks it did”. Wegner presents his own list of syndromes and maladies, all of which reinforce the mind-boggling sense of what fragile and subvertible machines we are. And of course, Oliver Sacks[22] was sending us memos from the edge of consciousness long before consciousness even had a bandwagon to jump on.
It might be easier to list the people who haven’t taken a stab at “explaining” consciousness. Theories run the gamut from diffuse electrical fields to quantum puppet-shows; consciousness has been “located” in the frontoinsular cortex and the hypothalamus and a hundred dynamic cores in between.[100], [101], [102], [103], [104], [105], [106], [107], [108], [109], [110] (At least one theory[111] suggests that while great apes and adult Humans are sentient, young Human children are not. I admit to a certain fondness for this conclusion; if children aren’t nonsentient, they’re certainly psychopathic).
But beneath the unthreatening, superficial question of what consciousness is floats the more functional question of what it’s good for. Blindsight plays with that issue at length, and I won’t reiterate points already made. Suffice to say that, at least under routine conditions, consciousness does little beyond taking memos from the vastly richer subconscious environment, rubber-stamping them, and taking the credit for itself. In fact, the nonconscious mind usually works so well on its own that it actually employs a gatekeeper in the anterior cingulate cortex to do nothing but prevent the conscious self from interfering in daily operations.[112], [113], [114] (If the rest of your brain were conscious, it would probably regard you as the pointy-haired boss from Dilbert.)
Sentience isn’t even necessary to develop a “theory of mind”. That might seem completely counterintuitive: how could you learn to recognise that other individuals are autonomous agents, with their own interests and agendas, if you weren’t even aware of your own? But there’s no contradiction, and no call for consciousness. It is entirely possible to track the intentions of others without being the slightest bit self-reflective.[107] Norretranders declared outright that “Consciousness is a fraud”.[115]
Art might be a bit of an exception. Aesthetics seem to require some level of self-awareness — in fact, the evolution of aesthetics might even be what got the whole sentience ball rolling in the first place. When music is so beautiful it makes you shiver, that’s the reward circuitry in your limbic system kicking in: the same circuitry that rewards you for fucking an attractive partner or gorging on sucrose.[116] It’s a hack, in other words; your brain has learned how to get the reward without actually earning it through increased fitness.[98] It feels good, and it fulfills us, and it makes life worth living. But it also turns us inward and distracts us. Those rats back in the sixties, the ones that learned to stimulate their own pleasure centers by pressing a lever: remember them? They pressed those levers with such addictive zeal that they forgot to eat. They starved to death. I’ve no doubt they died happy, but they died. Without issue. Their fitness went to zero.
Aesthetics. Sentience. Extinction.
And that brings us to the final question, lurking way down in the anoxic zone: the question of what consciousness costs. Compared to nonconscious processing, self-awareness is slow and expensive.[112] (The premise of a separate, faster entity lurking at the base of our brains to take over in emergencies is based on studies by, among others, Joe LeDoux of New York University[117], [118]). By way of comparison, consider the complex, lightning-fast calculations of savants; those abilities are noncognitive,[119] and there is evidence that they owe their superfunctionality not to any overarching integration of mental processes but to relative neurological fragmentation.[4] Even if sentient and nonsentient processes were equally efficient, the conscious awareness of visceral stimuli — by its very nature — distracts the individual from other threats and opportunities in its environment. (I was quite proud of myself for that insight. You’ll understand how peeved I was to discover that Wegner had already made a similar point back in 1994.[120]) The cost of high intelligence has even been demonstrated by experiments in which smart fruit flies lose out to dumb ones when competing for food,[121] possibly because the metabolic demands of learning and memory leave less energy for foraging. No, I haven’t forgotten that I’ve just spent a whole book arguing that intelligence and sentience are different things. But this is still a relevant experiment, because one thing both attributes do have in common is that they are metabolically expensive. (The difference is, in at least some cases intelligence is worth the price. What’s the survival value of obsessing on a sunset?)
While a number of people have pointed out the various costs and drawbacks of sentience, few if any have taken the next step and wondered out loud if the whole damn thing isn’t more trouble than it’s worth. Of course it is, people assume; otherwise natural selection would have weeded it out long ago. And they’re probably right. I hope they are. Blindsight is a thought experiment, a game of Just suppose and What if. Nothing more.
On the other hand, the dodos and the Steller sea cows could have used exactly the same argument to prove their own superiority, a thousand years ago: if we’re so unfit, why haven’t we gone extinct? Why? Because natural selection takes time, and luck plays a role. The biggest boys on the block at any given time aren’t necessarily the fittest, or the most efficient, and the game isn’t over. The game is never over; there’s no finish line this side of heat death. And so, neither can there be any winners. There are only those who haven’t yet lost.
Cunningham’s stats about self-recognition in primates: those too are real. Chimpanzees have a higher brain-to-body ratio than orangutans,[122] yet orangs consistently recognise themselves in mirrors while chimps do so only half the time.[123] Similarly, those nonhuman species with the most sophisticated language skills are a variety of birds and monkeys — not the presumably “more sentient” great apes who are our closest relatives.[81], [124] If you squint, facts like these suggest that sentience might almost be a phase, something that orangutans haven’t yet grown out of but which their more-advanced chimpanzee cousins are beginning to. (Gorillas don’t self-recognise in mirrors. Perhaps they’ve already grown out of sentience, or perhaps they never grew into it.)
Of course, Humans don’t fit this pattern. If it even is a pattern. We’re outliers: that’s one of the points I’m making.
I bet vampires would fit it, though. That’s the other one.
Finally, some very timely experimental support for this unpleasant premise came out just as Blindsight was being copy edited: it turns out that the unconscious mind is better at making complex decisions than is the conscious mind.[125] The conscious mind just can’t handle as many variables, apparently. Quoth one of the researchers: “At some point in our evolution, we started to make decisions consciously, and we’re not very good at it.”[126]
The child Siri Keeton was not unique: we’ve been treating certain severe epilepsies by radical hemispherectomy for over fifty years now.[127] Surprisingly, the removal of half a brain doesn’t seem to impact IQ or motor skills all that much (although most hemispherectomy patients, unlike Keeton, have low IQs to begin with).[128] I’m still not entirely sure why they remove the hemisphere; why not just split the corpus callosum, if all you’re trying to do is prevent a feedback loop between halves? Do they scoop out one half to prevent alien hand syndrome — and if so, doesn’t that imply that they’re knowingly destroying a sentient personality?
The maternal-response opioids that Helen Keeton used to kickstart mother-love in her damaged son were inspired by recent work on attachment-deficit disorders in mice.[129] The iron-scavenging clouds that appear in the wake of the Firefall are based on those reported by Plane et al.[130] I trawled The Gang of Four’s linguistic jargon from a variety of sources.[81], [131], [132], [133] The multilingual speech patterns of Theseus’ crew (described but never quoted, thank God) were inspired by the musings of Graddol,[134] who suggests that science must remain conversant in multiple grammars because language leads thought, and a single “universal” scientific language would constrain the ways in which we view the world.
The antecedent of Szpindel’s and Cunningham’s extended phenotypes exists today, in the form of one Matthew Nagel.[135] The spliced prosthetics that allow them to synesthetically perceive output from their lab equipment hail from the remarkable plasticity of the brain’s sensory cortices: you can turn an auditory cortex into a visual one by simply splicing the optic nerve into the auditory pathways (if you do it early enough).[136], [137] Bates’ carboplatinum augments have their roots in the recent development of metal musculature.[138], [139] Sascha’s ironic denigration of TwenCen psychiatry hails not only from (limited) personal experience, but from a pair of papers[140], [141] that strip away the mystique from cases of so-called multiple personality disorder. (Not that there’s anything wrong with the concept; merely with its diagnosis.) The fibrodysplasia variant that kills Chelsea was based on symptoms described by Kaplan et al.[142]
And believe it or not, those screaming faces Sarasti used near the end of the book represent a very real form of statistical analysis: Chernoff Faces,[143] which are more effective than the usual graphs and statistical tables at conveying the essential characteristics of a data set.[144]
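For the curious, the core of the technique is nothing more than a mapping from data dimensions to facial features, so that an anomalous record jumps out as an odd-looking face. Here is a minimal sketch; the feature names, the crew-vitals example, and the normalisation are my own arbitrary choices, not Chernoff's original parameter set:

```python
def chernoff_face(row, lo, hi):
    """Map one data record to face-feature parameters in [0, 1].
    `row`, `lo`, and `hi` are dicts keyed by variable name; each
    variable drives exactly one facial feature."""
    features = ["face_width", "eye_size", "mouth_curve", "brow_slant"]
    face = {}
    for feat, var in zip(features, row):
        span = hi[var] - lo[var]
        face[feat] = (row[var] - lo[var]) / span if span else 0.5
    return face

# A hypothetical crew-vitals record rendered as a face: a racing
# heart makes the eyes huge, a flat EEG flattens the mouth, etc.
vitals = {"temp": 37.0, "heart_rate": 180, "eeg": 0.1, "o2": 0.95}
lo = {"temp": 35.0, "heart_rate": 40, "eeg": 0.0, "o2": 0.8}
hi = {"temp": 40.0, "heart_rate": 200, "eeg": 1.0, "o2": 1.0}
face = chernoff_face(vitals, lo, hi)
```

A plotting layer would then draw each face from those parameters; the payoff is that humans are ferociously good at reading faces, so a whole table of records can be scanned at a glance.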