Genes, Design, and Designer

Skeletal remains, footprints, and artifacts indicate that human beings of our type have existed for hundreds of millions of years and that we did not evolve from more primitive apelike creatures. But what about biochemical and genetic evidence? Many evolutionists assert that there is strong evidence from DNA that humans arose relatively recently, most probably between one and two hundred thousand years ago in Africa. Evolutionists also claim that by genetics and biochemistry one can trace the origin of the human species all the way back to the very beginnings of life on earth. I will show that this genetic and biochemical evidence is ambiguous and that the conclusions based upon it are shaky.


People often get the impression that scientists, when they talk about genetic data, are reading directly from the “book of life.” But genetic data is just a series of A’s, T’s, G’s, and C’s, representing a sequence of molecules called nucleotides (adenine, thymine, guanine, and cytosine) on a DNA strand. When scientists try to turn that series of letters into statements about human origins, they use many speculative assumptions and interpretations. Anthropologist Jonathan Marks (1994, p. 61) therefore says it is a “pernicious pseudo-scientific idea that independently . . . genetic data tell a tale.” Marks (1994, p. 61) says that genetics is one area of science in which “sloppy thought and work can often carry as much weight as careful thought and work,” and he therefore warns that “one is forced to wonder about the epistemological foundations of any specific conclusions based on genetic data.” Marks (1994, p. 59) noted that “the history of biological anthropology shows that, from the beginning of the 20th century, grossly naïve conclusions have been promoted simply on the basis that they are derived from genetics.” In light of this, the fossil evidence outlined in the previous chapter retains its importance as a useful check on genetic speculations. For the following discussion, I am indebted to the works of Stephen Meyer, William Dembski, Michael Behe, and other members of the modern intelligent design movement.



The Beginning of Life

The genetic theory of human evolution is in trouble right from the start. Technically, evolution is not about the origin of life. Instead, evolutionists study the changes in reproducing biological forms, each with a genetic system that helps determine the exact nature of the form. Changes in the genetic system result in changes in the successive generations of biological forms. But evolutionists understand that they also have to explain the origin of the first biological forms, and their genetic systems, from prebiotic chemical elements. Therefore, proposals for the natural origin of the first biological organisms have become an integral part of modern evolutionary thought.


Today, the simplest independent biological organisms are single cells, and most scientists assume that the first real living things were also single cells. Early evolutionists like Ernst Haeckel (1905, p. 111) and Thomas H. Huxley (1869, pp. 129–145) thought cells were mere blobs of protoplasm and gave relatively simple explanations for their origin. They thought chemicals like carbon dioxide, nitrogen, and oxygen would somehow spontaneously crystallize into the slimy substance of life (Haeckel 1866, pp. 179–180; 1892, pp. 411–413).


As time passed, scientists began to recognize that even simple cells are more than just blobs of protoplasm. They have a complex biochemical structure. In the twentieth century, Alexander I. Oparin, a Russian biochemist, outlined an elaborate set of chemical stages leading to the formation of the first cell. He believed that the process would take a very long time—hundreds of millions, perhaps billions of years. Oparin (1938, pp. 64–103) proposed that ammonia (a nitrogen compound), methane, hydrogen, carbon dioxide and water vapor, with ultraviolet light as an energy source, would combine with metallic elements dissolved in water. This would produce a nitrogen-rich prebiotic soup, in which simple hydrocarbon molecules would form. These would combine into amino acids, sugars, and phosphates (Oparin 1938, pp. 133–135), and these would in turn form proteins. The groups of molecules reacting together in this way would become attracted to each other and surround themselves with chemical walls, resulting in the precursors to the first cells. Oparin called them “coacervates” (Oparin 1938, pp. 148–159). These primitive cells would compete for survival, becoming more complex and stable.


Oparin’s ideas remained largely theoretical until the experiments of Stanley Miller and Harold Urey. Miller and Urey proposed, as did Oparin, that the earth’s early atmosphere was composed of methane, ammonia, hydrogen, and water vapor. They reproduced this atmosphere in a laboratory and then ran electric sparks through the mixture. The sparks represented lightning, which provided the energy needed to get the relatively stable chemical ingredients of the experiment to react with each other. The experimental apparatus included a flask of water, in which the tarlike residues of the experiment accumulated. When the water was analyzed after a week, it yielded, among other things, three amino acids in low concentrations (Miller 1953). Amino acids are the building blocks of proteins, which are necessary ingredients of living things.


Later experiments by other researchers produced all except one of the twenty biological amino acids. Still more experiments produced fatty acids and nucleotides, which are necessary for DNA and RNA. But the experiments did not produce another essential element of DNA and RNA, the sugars deoxyribose and ribose (Meyer 1998, p. 118). Nevertheless, many scientists believed that a viable cell could eventually arise from the chemical elements produced in the prebiotic soup.


However, this idea has several shortcomings. When geochemists analyze the sediments from the early history of the earth, they fail to find evidence of a nitrogen-rich prebiotic soup of the kind predicted by Oparin. Other researchers have determined that the earth’s early atmosphere was most probably not Oparin’s mixture of water vapor and the reducing gases ammonia (a nitrogen compound), methane, and hydrogen. Instead it was a mixture of water vapor and the neutral gases carbon dioxide and nitrogen (Walker 1977, pp. 210, 246; Kerr 1980). Some free oxygen was also present (Kerr 1980; Dimroth and Kimberley 1976). Today, scientists believe most of the oxygen in the earth’s atmosphere came from photosynthesis in plants, but even before plants arose, oxygen could have been derived from the breakup of H2O molecules and from gases released into the atmosphere by volcanoes. Even small amounts of free oxygen would hamper the production of amino acids and other molecules necessary for life. The oxygen would make the required reactions more difficult, and it would also, by oxidation, break down any organic molecules that did form.


Despite these difficulties, evolutionists maintain their faith that the ingredients for the bodies of the first living things could have formed spontaneously during the earth’s early history. Let us now consider in more detail some of their speculative ideas about how this may have happened. The ideas fall into three main categories: chance, natural selection, and self-organization.



Chance

Some evolutionists propose that chance operating on the molecular level can account for the origin of proteins, which are formed of long chains of amino acid subunits. But there are some big obstacles to such proposals. Let us consider a simple protein composed of 100 amino acid subunits. For a protein to function properly in an organism, the bonds between the amino acids must be peptide bonds. Amino acids can bond with each other in various ways, with peptide bonds occurring about half the time. So the odds of getting 100 amino acids with all peptide bonds are 1 in 10^30 (1 followed by 30 zeros). Also, each amino acid molecule has a left-handed L-form (from laevus, the Latin word for left) and a right-handed D-form (from dexter, the Latin word for right). The two forms are mirror images of each other, like right and left shoes, or right and left gloves. In living things, all the proteins are composed of amino acid subunits of the L-form. But L and D forms of amino acids occur equally in nature. To get a chain of 100 L-form amino acids, the odds again are 1 in 10^30. This is equivalent to flipping a coin and getting heads one hundred times in a row. Therefore, the odds of getting a 100 amino acid chain with all peptide bonds and all L-form amino acids would be about 1 in 10^60, which makes such an event practically impossible in the time available.


Even if all the bonds are peptide bonds and all the amino acids are L-forms, that is still not enough to give us a functional protein. Not just any combination of amino acid subunits will give us a protein that contributes to the function of a cell. The right amino acids must be arranged in quite specific orders (Meyer 1998, p. 126). The odds against the right 100 amino acids arranging themselves in the right order are in themselves quite high: about 1 in 10^65 (the number of atoms in our galaxy is about 10^65). Putting this more picturesquely, biochemist Michael Behe (1994, pp. 68–69) says that getting a sequence of 100 amino acids that functions as a protein is comparable to finding one marked grain of sand in the Sahara desert, not once but three times in a row. If you put in the other factors (peptide bonding, L-forms only), then the odds lengthen to 1 chance in 10^125. So chance does not seem to work as an explanation for the chemical origin of life.
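As a check on the arithmetic, the three probability factors quoted above can be multiplied together in a few lines of Python. The 1-in-2 chances per bond and per chirality, and Meyer's 1-in-10^65 sequence estimate, are simply taken as given from the text:

```python
from math import log10

n = 100  # amino acids in the hypothetical protein

# 99 bonds join 100 amino acids; assume each is a peptide
# bond with probability 1/2, as stated in the text.
p_peptide = 0.5 ** (n - 1)

# Each amino acid is the L-form with probability 1/2.
p_chirality = 0.5 ** n

# Estimated chance that the sequence is functional (Meyer 1998).
p_sequence = 10.0 ** -65

p_total = p_peptide * p_chirality * p_sequence

print(f"peptide bonds : 1 in 10^{-log10(p_peptide):.0f}")
print(f"L-forms only  : 1 in 10^{-log10(p_chirality):.0f}")
print(f"combined      : 1 in 10^{-log10(p_total):.0f}")
```

The three exponents round to 30, 30, and 125, matching the figures cited in the text (2^99 and 2^100 are each about 10^30).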


To avoid this conclusion, some scientists appeal to an infinite number of universes. But they have no proof that even one additional universe exists. Neither can they tell us if stable molecules form in any of these imaginary universes (stable molecules are necessary for the kind of life we observe in this universe). We shall consider this topic in greater detail in a later chapter.

Natural Selection

Some scientists, such as Oparin (1968, pp. 146–147), have proposed that natural selection could help select among amino acid chains to produce functional proteins, thus improving the odds that these proteins could form. In other words, protein formation does not rely on pure chance. But there are two problems with this. First, this prebiotic natural selection must operate on amino acid chains that were produced randomly, and we have already seen that the odds are very heavily against getting even a simple chain of amino acids with all peptide bonds and all L forms. So it would be hard to get even the basic raw materials (amino acid chains) upon which natural selection could operate. Second, natural selection involves some kind of molecular replication system. The odds that any such replication system could form by chance are even more remote than the odds against the chance formation of several kinds of amino acid chains upon which natural selection could act. The replication system itself must be made of combinations of highly specific complex protein molecules. Proposals such as Oparin’s therefore confront a major contradiction. Natural selection is supposed to produce the complex proteins, but natural selection requires a reliable molecular replication system, and all such systems known today are formed from complex and very specifically structured protein molecules. Oparin suggested that perhaps the earliest replication system did not have to be very reliable and that the system could make use of proteins that were not as specifically structured as proteins currently found in organisms. But Meyer (1998, p. 127) points out that “lack of . . . specificity produces ‘error catastrophes’ that efface the accuracy of self-replication and eventually render natural selection impossible.”


Despite these difficulties, Richard Dawkins (1986, pp. 47–49), in his book The Blind Watchmaker, still proposes that chance and natural selection (represented by a simple computer algorithm) can yield biological complexity. To demonstrate that the process is workable, he programmed a computer to generate random combinations of letters and compare them to a target sequence that forms an intelligible, grammatically correct sentence. Those combinations of letters that come closest to the meaningful target sequence are preserved, whereas those that depart from the target sequence are rejected. After a certain number of runs, the computer produces the target sequence. Dawkins takes this as proof that random combinations of chemicals could by natural selection gradually produce biologically functional proteins. The reasoning is, however, faulty. First, Dawkins assumes the existence of a complex computer, which we do not find in nature. Second, he assumes the presence of a target sequence. In nature there is no target sequence of amino acids that is specified in advance, and to which random sequences of amino acids can be compared. Third, the trial sequences of letters that are selected by the computer do not themselves have any linguistically functional advantage over other sequences, other than being one letter closer to the target sequence. For the analogy between the computer algorithm and real life to hold, each sequence of letters chosen by the computer should itself have some meaning. In real life, an amino acid sequence leading up to a complex protein with a specific function should itself have some function. If it has no function that can be tested for fitness by natural selection, there is nothing on which natural selection can operate. Meyer (1998, p. 128) says, “In Dawkins’s simulation, not a single functional English word appears until after the tenth iteration. . . . 
Yet to make distinctions on the basis of function among sequences that have no function whatsoever would seem quite impossible. Such determinations can only be made if considerations of proximity to possible future functions are allowed, but this requires foresight that molecules do not have.” In other words, Dawkins’s result can only be obtained because of the element of intelligent design embedded in the whole experiment.
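For readers who want to see the circularity concretely, here is a minimal sketch of Dawkins-style cumulative selection. The target phrase, population size, and mutation rate are illustrative assumptions, not Dawkins's published parameters. Note that the score function has the target built into it, which is precisely the element of foresight objected to above:

```python
import random

random.seed(1)  # reproducible illustration

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # Fitness is defined as proximity to a pre-specified target --
    # the step with no analogue in prebiotic chemistry.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(s, rate=0.05):
    # Randomly change each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in s)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    generations += 1
    # Keep whichever of the parent and its offspring best matches the target.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)

print(f"reached target in {generations} generations")
```

Because each round preserves whichever string is closest to the target, the program converges in a modest number of generations rather than the astronomical times required by pure chance; remove the target from the score function, and the search goes nowhere.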

Self-Organization

Some scientists have suggested that something more than chance and natural selection is involved in the linking of amino acids to form proteins. They propose that certain chemical systems have self-organizing properties or tendencies. Steinman and Cole (1967) suggested that one amino acid may be attracted to certain other amino acids more strongly than to others, and there is experimental evidence of such differential attraction among amino acids. Steinman and Cole claimed that the ordering of amino acids they observed in their experiments matched the ordering of amino acids in ten actual proteins. But when Bradley and his coworkers (Kok et al. 1988) compared the sequences reported by Steinman and Cole to a larger sample of sequences from 250 actual proteins, they found these 250 sequences “correlate much better with random statistical probabilities than with the experimentally measured dipeptide bond frequencies of Steinman and Cole” (Bradley 1998, p. 43). Also, if the properties of the twenty biological amino acids strongly determined the bonding of protein sequences, we would expect only a few kinds of proteins to form, whereas we observe that thousands form (Bradley 1998, p. 43).


Another kind of self-organization happens when disordered molecules of a substance form crystals. This is technically called “spontaneous ordering near equilibrium phase changes.” The formation of crystals is fairly easy to explain. For example, when the temperature of water is lowered below the melting point, the tendency of water molecules to interact in a disordered way is overcome, and they link together in an ordered fashion. In this phase transition, the water molecules tend toward a state of equilibrium, moving to the lowest potential energy and giving up energy in the process. Imagine that there is a large depression in the middle of a billiards table. If you tilt the table here and there, the wandering balls will naturally wind up in the depression, touching each other and motionless. In the process energy is lost; that is, the process is exothermic. But the formation of complex biological molecules (biopolymers) is different. It is an endothermic process, meaning energy must be added, and it takes place far from thermal equilibrium. The polymers are at a higher energy potential than their individual components. It is as if the pool table has a hump in the middle, rather than a depression. It is much more difficult to imagine all the balls winding up together on top of the hump simply as a result of random movement than it is to imagine them winding up in the depression in a state of thermal equilibrium. It would take some energy to get the balls up onto the hump and keep them there. Bradley (1998, p. 42) says, “All living systems live energetically well above equilibrium and require a continuous flow of energy to stay there . . . Equilibrium is associated with death in the biosphere, making any explanation of the origin of life that is based on equilibrium thermodynamics clearly incorrect. . . . phase changes such as water freezing into ice cubes or snowflakes is irrelevant to the processes necessary to generate biological information.”


The kind of order found in crystals is repetition of simple patterns, whereas the kind of order found in living things is highly complex and nonrepetitive. The order found in the biochemical components of the bodies of living things is not only highly complex, but very specific. This specified complexity has a high information content, which allows the biochemical components to perform specific functions that contribute to the survival of the organism. Compare the letter sequences ABABABAB, RXZPRKLDMW, and THE BIG RED HOUSE. The first sequence is ordered, but it is not complex and therefore is not informative. Crystals are like this. The second sequence is complex, but it is also not informative. The third sequence is both complex and informative. The sequence of letters encodes information that allows the sentence to perform a specific communication function. This property can be called “specified complexity.” Biological complexity of the kind we are talking about in proteins and other molecules in cells is specified complexity: it is complexity that specifies a function (like the protein-coding ability of DNA). Such patterns of complexity are thus different from the simple repetitive patterns that arise in the crystallization process (Meyer 1998, p. 134).
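The contrast between repetitive order and complexity can be loosely illustrated with a standard compression test: a highly ordered string compresses well, while a complex one does not. This is only a statistical analogy, and the strings below are padded-out versions of the short examples in the text. Notice that compression alone cannot separate the specified sequence from the random one; that is exactly why specification must be defined by function, not by statistics:

```python
import zlib

ordered = "AB" * 32  # crystal-like repetition of a simple pattern
random_ish = "RXZPRKLDMWQVYTFGJCNHUSEIXOBAZKPWDLMRQTVYFGNHCJSUEIKXOBAPZWDLMRQS"  # scrambled letters
specified = "THE BIG RED HOUSE ON THE HILL HAS A TALL DOOR AND FOUR WINDOWS"

def compressed_size(text):
    # Size in bytes after zlib compression at maximum effort.
    return len(zlib.compress(text.encode(), 9))

for label, text in [("ordered", ordered), ("random", random_ish),
                    ("specified", specified)]:
    print(f"{label:9s} original={len(text):3d} compressed={compressed_size(text)}")
```

The repetitive string shrinks dramatically; the random and specified strings stay close to their original size. Compression measures repetitiveness, so the functional difference between the last two strings is invisible to it.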


The physical chemist Ilya Prigogine proposed that self-reproducing organisms could arise from reacting chemicals brought together in the convection currents of thermal baths, far from thermal equilibrium. This is somewhat different from the crystal formation process, which involves phase transitions at or near thermal equilibrium. Bradley (1998, p. 42) nevertheless concludes that although the ordered behavior of the chemicals in Prigogine’s systems is more complex than that observed when the systems are at thermal equilibrium, the order is still “more the type of order that we see in crystals, with little resemblance to the type of complexity that is seen in biopolymers.” And whatever ordering is observed can be attributed to the complex design of the experimental apparatus. Meyer (1998, p. 136), citing the work of Walton (1977), says, “even the self-organization produced in Prigogine-like convection currents does not exceed the organization or information represented by the experimental apparatus used to create the currents.”


Manfred Eigen has proposed that groups of interacting chemicals called “hypercycles” could be a step toward self-reproducing organisms (Eigen and Schuster 1977, 1978a, 1978b). But John Maynard Smith (1979) and Freeman Dyson (1985) have exposed some flaws in this proposal. “They show, first,” says Meyer (1998, p. 136), “that Eigen’s hypercycles presuppose a large initial contribution in the form of a long RNA molecule and some forty specific proteins. More significantly, they show that because hypercycles lack an error-free mechanism of self-replication, they become susceptible to various error catastrophes that ultimately diminish, not increase, the information content of the system over time.”


Stuart Kauffman of the Santa Fe Institute has tried another approach to complexity and self-organization. He defines “life” as a closed network of catalyzed chemical reactions that reproduce each molecule in the network. No single molecule is engaged in self-replication. But he asserts that if you have a system of at least a million proteinlike molecules, the odds are that each one will catalyze the formation of another molecule in the system. Therefore the system as a whole replicates. When the system reaches a certain state, it supposedly undergoes a phase transition, introducing a new level of complexity for the whole system. But Kauffman’s concept is based purely on computer models with little relevance to real-life systems of reacting chemicals (Bradley 1998, p. 44).


First of all, Kauffman’s estimate of a million molecules is too low for each kind of molecule to catalyze the formation of another kind of molecule in the system. But even if a million kinds of molecules were enough, the odds that a particular catalyzing molecule will be near the correct chemical ingredients needed to produce another molecule are remote (Bradley 1998, p. 45).
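The form of Kauffman's threshold argument can be caricatured in a toy calculation. Suppose each of N molecular species catalyzes any one given reaction with some tiny independent probability p; the chance that a particular species has at least one catalyst somewhere in the system then rises steeply with N. The values of N and p below are arbitrary illustrations, not Kauffman's actual figures, and, as Bradley's criticism emphasizes, the calculation says nothing about whether catalyst and reactants ever physically meet:

```python
def p_has_catalyst(n_species, p_single):
    # Probability that at least one of n_species molecules
    # catalyzes the formation of a given target molecule.
    return 1.0 - (1.0 - p_single) ** n_species

p = 1e-6  # assumed chance that one molecule catalyzes one given reaction
for n in (10**3, 10**6, 10**9):
    print(f"N = {n:>13,}  P(catalyzed) = {p_has_catalyst(n, p):.3f}")
```

With these assumed numbers, the probability is negligible at a thousand species, roughly 1 - 1/e at a million, and near certainty at a billion. The sharp rise is the "phase transition" Kauffman appeals to; the criticism in the text is that the probability bookkeeping ignores the thermodynamics and logistics of the real chemistry.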


Furthermore, Kauffman’s computer models do not adequately take into account the endothermic nature of the formation of biopolymers: the reactions require energy from the system and would quickly deplete it, leaving the system “dead.” Kauffman proposes that energy-producing reactions in the system could compensate for the energy consumed in the formation of biopolymers. But Bradley (1998, p. 45) points out that these reactions will also require that certain molecules be in the right places at the right times, in order to participate in the reactions. How all this is supposed to happen is not satisfactorily explained in Kauffman’s models. Bradley (1998, p. 45) adds: “Dehydration and condensation onto substrates, his other two possible solutions to the thermodynamic problems, also further complicate the logistics of allowing all of these 1,000,000 molecules to be organized into a system in which all catalysts are rightly positioned relative to reactants to provide their catalytic function.” In other words, Kauffman’s system does not realistically account for getting all the molecular elements arranged in the proper places for all the needed catalytic and energy-producing reactions to take place. In a computer this may not matter, but in real life it does.

The RNA World

The biggest problem in all origin-of-life scenarios remains explaining in a detailed way the origin of the first DNA replication system found in modern cells. Trying to explain how the DNA replication system arose directly from molecular subunits has proved so difficult that scientists have given up trying. They have concluded that there must have been simpler precursors to the DNA system. Today, many scientists are concentrating their efforts on a replication system based on RNA, which plays a subordinate role in today’s cellular reproduction processes. They imagine in the earth’s early history an “RNA world” that existed before the DNA world. RNA is a nucleic acid, and it has the ability, under certain circumstances, to replicate itself. Proteins cannot replicate themselves without the help of enzymes that catalyze the replication process. So RNA offers a possible solution to this problem. Perhaps a system of replicating RNA molecules could eventually start catalyzing the replication of proteins, the building blocks of an organism.


The main problem with the RNA world is that scientists have not given a satisfactory explanation of how RNA could spontaneously form. Gerald Joyce and Leslie Orgel, two prominent RNA researchers, have admitted that it is difficult to see how RNA could have self-organized in the earth’s early environment. The primary components of RNA’s subunits, the nucleotide bases and sugars, tend to repel each other. Joyce and Orgel (1993, p. 13) called the idea that RNA could self-organize “unrealistic in light of our current understanding of prebiotic chemistry” and spoke of “the myth of a self-replicating RNA molecule that arose de novo from a soup of random polynucleotides.” They also called attention to the primary paradox of origin-of-life theories: “Without evolution it appears unlikely that a self-replicating ribozyme [RNA] could arise, but without some form of self-replication there is no way to conduct an evolutionary search for the first, primitive self-replicating ribozyme.” It should also be kept in mind that RNA can self-replicate only under carefully controlled laboratory conditions not easily duplicated in the early history of the earth. Another problem is that there are many kinds of RNA molecules, and not all of them catalyze their own self-replication. Behe (1996, p. 172) observes: “The miracle that produced chemically intact RNA would not be enough. Since the vast majority of RNAs do not have useful catalytic properties, a second miraculous coincidence would be needed to get just the right chemically intact RNA.”


Some researchers have expanded their search beyond RNA for a first nucleotide molecule capable of reproducing itself without the help of enzymes. But thus far all such attempts have been unsuccessful. For example, Stanley Miller and others have proposed peptide nucleic acid (PNA) as an alternative to RNA as the first self-replicating molecule. According to Miller, PNA is a more stable molecule than RNA. But in his experiments Miller has only been able to produce some components of PNA and not the molecule itself (Travis 2000b). In a study published in Science, Eschenmoser (1999, p. 2118) says: “. . . it has not been demonstrated that any oligonucleotide system possesses the capacity for efficient and reliable nonenzymatic replication under potentially natural conditions.” Eschenmoser, speaking of RNA or any other oligonucleotide molecule, said that “its chances for formation in an abiotic natural environment remain open to question.” He admitted that although most scientists think that the formation of some kind of RNA-like oligonucleotide is a key step in the formation of life, “convincing experimental evidence that such a process can in fact occur under potentially natural conditions is still lacking.”

Developmental Biology

Even if we grant the evolutionists the existence of some first simple living thing, we still have to consider how that first living thing gradually differentiated into other living things, including human beings. One source of evidence about the history of such gradual development is the fossil record. When we looked carefully into the human fossil record, we found evidence that humans have existed since the very beginnings of life. Another type of evidence can be found in developmental biology. Most animals begin life as fertilized eggs, which become embryos, which become infant organisms, which finally become adult organisms. How this happens is the subject matter of developmental biology. Darwinists say they can find evidence for evolution in developmental biology.


Darwinists often point out that at a certain stage of its development the human embryo resembles that of a fish, and they take this as a proof of evolution. Actually, at a certain stage all vertebrate embryos resemble a fish, and thus resemble each other. Darwin himself said “the embryos of mammals, birds, fishes, and reptiles” are “closely similar.” He thought the best explanation was that the adults of these species are all “the modified descendants of some ancient progenitor.” He also proposed that “the embryonic or larval stages show us, more or less completely, the condition of the progenitor of the whole group in its adult state” (Darwin 1859, pp. 338, 345). In other words, the early fishlike state of the embryo in vertebrates resembles the original adult vertebrate from which all today’s vertebrates supposedly came—we were all once fish. But the logic is flawed by a false estimation of the similarity of the embryos.


The process by which an embryo develops into an adult is called ontogeny, and the process of evolution by which a common ancestor supposedly develops into various descendants is called phylogeny. Many Darwinists, to greater and lesser degrees, have believed that the embryonic development of any vertebrate mirrors the evolutionary process that gave rise to it. As the German Darwinist Ernst Haeckel put it: “Ontogeny recapitulates phylogeny.” To illustrate his point, Haeckel published a series of images of the embryonic development of several vertebrates, each one looking at first like a fish and then developing into its characteristic form. It was later discovered that Haeckel had doctored the images to make the early fishlike stages look more similar in his illustration than they actually were in nature. Haeckel was formally found guilty of this offense by an academic court at the University of Jena. Nevertheless, his illustration of the vertebrate embryos is still widely printed in textbooks of evolution even today.


Apart from the doctoring of the images in the classic illustration of the vertebrate embryos, there is another deception. The first images of the embryo in the illustration, the ones sharing an impressive similarity, are actually from a middle stage of embryonic development. If the illustration included the earlier stages of embryonic development, including the eggs, an entirely different impression would emerge.


The eggs, the single-celled starting points of the embryos of all animals, are vastly different. Bird and reptile eggs are very large. Fish eggs are usually smaller, but still easily visible to human eyes. The human egg, on the other hand, is of microscopic size.


The first stage of embryonic development is cleavage, the division of the egg into cells. Each group of vertebrate animals has its own cleavage pattern, very different from the others. During the cleavage stage, the basic anterior to posterior (front to rear) direction of the body is established. Next comes the gastrula phase, during which the basic body plan of the animal is elaborated. During gastrulation, the cells begin to differentiate into the various tissues. As in the case of cleavage patterns, gastrulation patterns display a great deal of variation among the different kinds of animals. At this stage in development, the embryos therefore look quite different from each other (Nelson 1998, p. 154; Wells 1998, p. 59; Elinson 1987).


It is only in the next stage of embryonic development, the pharyngula stage, that the embryos of fish, reptiles, birds, and mammals come to temporarily resemble each other, looking somewhat like little fishes. In the pharyngula stage, all the embryos have little folds of tissue in the throat region that look like gills. In fish, they do become gills, but in other animals they form the inner ear and thyroid glands. So the embryos of humans and other mammals never have gills, nor do the embryos of birds and reptiles (Wells 1998, p. 59). After the pharyngula stage, the embryos again diverge in appearance.


Considered in its entirety, the embryonic development of the vertebrates, rather than supporting evolution, tends to pose a strong challenge to it. According to evolutionary theory, all metazoans (multicelled creatures) must have come from a common ancestor. This creature would have had a certain body plan. To change that basic body plan would require changes in the genes that control the early embryonic stages of that body plan’s development. But according to evolutionary theory, the genes controlling the early stages of development should not be subject to very much change. Any such changes could cause massive disruptions in the development of the organism, causing its death or serious malformation.


That is what we see today. As Nelson (1998, p. 159) says, “All experimental evidence suggests that development, when perturbed, either shuts down, or returns via alternate and redundant pathways to its primary trajectory.” Therefore, according to most evolutionary biologists, positive mutations should occur only in genes responsible for details of later phases of development of an organism.


According to evolutionary theory, we should expect the earliest phases of development in living things to be quite similar. But, as we have seen, the early developmental stages of living things are vastly different from each other (Nelson 1998, p. 154). For example, after the egg begins to divide, there are several pathways by which the embryos of different animals reach the gastrula stage. Eric Davidson (1991, p. 1), a developmental biologist, has called this variety of cleavage patterns “intellectually disturbing.” It is somewhat of a mystery how all these very different patterns of early development came from some common ancestor. Richard Elinson (1987, p. 3) asked: “If early embryogenesis is conservative, how did such major changes in the earliest events of embryogenesis occur?” He calls it “a conundrum.”


Some (Thomson 1988, pp. 121–122) have proposed that early changes in development are obviously possible, simply because they have obviously occurred. This is a typical example of blind faith in evolutionary doctrine. Nelson (1998, p. 158) says: “Note that this position rests entirely on the assumption of common descent. There is little if any experimental evidence that ‘changes in early development are possible.’ I know of only a single example of heritable changes in metazoan cleavage patterns.” In other words, there is only a single experimentally verified example of a genetic change in the early development of an animal that has been passed on to its descendants. The change involves a mutation in the early development of the snail Lymnaea peregra, which causes only the direction of the coiling of its shell to switch from right to left (Nelson 1998, p. 170, citing Gilbert 1991, p. 86). This is not a very significant change. It represents no new biological feature.


So today there is practically no experimental evidence that early changes in development can result in viable organisms with new features. Some scientists propose that although such changes are not possible in today’s organisms, they were possible early in the history of evolution, resulting in major changes in body plans. Foote and Gould (1992, p. 1816) suggest that this proposed early period of developmental flexibility was closed off hundreds of millions of years ago at the end of the “Cambrian explosion,” during which all major body plans now seen in living things supposedly emerged. After the Cambrian explosion there was “some form of genetic and developmental locking.” The proof of this, say Foote and Gould, is that no new major body plans have emerged since the Cambrian. Further, they say that we do not see today creatures surviving major mutations in genes that control early development (Foote and Gould 1992, p. 1816). But this era of early plasticity of body plans, generated by changes in early developmental stages of the embryo, is purely speculative. Scientists cannot point to any specific reason, at the biomolecular level, why Cambrian creatures could have survived such major mutations.


Nelson (1998, p. 168) says: “Golden ages of evolution are postulated (e.g., the Cambrian explosion), in the complete absence of any mechanistic understanding, to accommodate the demands of a philosophy of nature that holds, in the face of abundant disconfirming evidence, that complex things come into existence by undirected mutation and selection from simpler things. Yet, however unlikely they may be, these golden ages of macroevolution are preferable by neo-Darwinists to taking at face value the demonstrable limits of organismal structure and function—for those limits imply the primary discontinuity of organisms one from another.” Discontinuity implies intelligent design of the separate species.


Scientists find it difficult to explain in any detailed way how these body plans (or Bauplans) came about from some common ancestor by evolutionary processes. Bruce Wallace (1984, cited in Nelson 1998, p. 160) tells of some of the problems involved in modifying a body plan: “The Bauplan of an organism . . . can be thought of as the arrangement of genetic switches that control the course of the embryonic and subsequent development of the individual; such control must operate properly both in time generally and sequentially in the separately differentiated tissues. Selection, both natural and artificial, that leads to morphological change and other developmental modification does so by altering the settings and triggerings of these switches . . . The extreme difficulty encountered when attempting to transform one organism into another but still functional one lies in the difficulty in resetting a number of the many controlling switches in a manner that still allows for the individual’s orderly (somatic) development.” It is like trying to transform a six-cylinder engine into an eight-cylinder engine while keeping the engine running through all the changes. Arthur (1987, cited in Nelson 1998, p. 170) says that “in the end we have to admit that we do not really know how body plans originate.”


Quite apart from understanding how genes could govern major changes in body plans, producing new organisms, scientists do not yet fully understand how genes direct the development of the body plan of any particular species. R. Raff and T. Kaufman (1991, p. 336) speak of science’s “currently poor understanding of the way in which genes direct the morphogenesis of even simple metazoan structures.” Each human being starts as a single cell—a fertilized egg. The egg begins to divide into more cells. Each cell contains exactly the same DNA, but the cells differentiate into various tissues and structures. How exactly this happens is not currently understood, even in very small multicellular organisms.


Some scientists believe that “homeotic” genes provide the answer to the specification of body plans and their development in an organism. In the late nineteenth century, biologists noted that body parts of some animals sometimes grew to resemble other body parts. For example, in insects an antenna might take on the form of a leg (a condition called Antennapedia). Such forms were called homeotic. The prefix homeo means “like, or similar,” so a homeotic leg would be a body part that resembles a leg. In the twentieth century, the gene responsible for the mutation that causes Antennapedia in fruit flies was discovered and named antp. But the big question is not how a leg can grow in place of an antenna, but how such complex structures as legs and antennae came into existence in the first place—something genetic researchers and developmental biologists have not yet fully explained.


Besides antp, there are other homeotic genes in the fruit fly, such as Pax-6, related to eye development. In 1995, Walter Gehring and his colleagues triggered expression of Pax-6 in abnormal locations, causing eyes to grow on the antennae and legs of fruit flies. Pax-6 is similar in flies and mammals (humans included). Part of the gene (the DNA-binding segment) is also found in worms and squids (Quiring et al. 1994). Researchers concluded that Pax-6 was “the master control gene for eye morphogenesis” and that it is universal in multicellular animals (Halder et al. 1995, p. 1792).


But Wells (1998, pp. 56–57) points out: “If the same gene can ‘determine’ structures as radically different as . . . an insect’s eyes and the eyes of humans and squids then that gene is not determining much of anything.” He adds: “Except for telling us how an embryo directs its cells into one of several built-in developmental pathways, homeotic genes tell us nothing about how biological structures are formed.”


In the case of the eye, evolutionists have to explain how this complicated biological structure arose not just once, but several times. Prominent evolutionists L. von Salvini-Plawen and Ernst Mayr (1977) say that “the earliest invertebrates, or at least those that gave rise to the more advanced phyletic lines, had no photoreceptors” and that “photoreceptors have originated independently in at least 40, but possibly up to 65 or more different phyletic lines.”



The Biological Complexity of Humans

The great complexity of the organs found in the human body defies evolutionary explanation. Darwinists have not explained in any detailed way how these organs could have arisen by random genetic variations and natural selection.

The Eye

The human eye is one such organ of apparently irreducible complexity. The pupil allows light into the eye, and the lens focuses the light on the retina. The eye also has features to correct for interference between light waves of different frequencies. It is hard to see how the eye could function without all of its parts being present. Even Darwin understood that the eye and other complex structures posed a problem for his theory of evolution, which required that such structures arise over many generations, step by step. Darwin did not give a detailed account of how this happened, but pointed to different living creatures with different kinds of eyes—some just light-sensitive spots, some simple depressions with simple lenses, and others more complex. He suggested that the human eye could have arisen in stages like this. He ignored the question of how the first light-sensitive spot came into being: “How a nerve comes to be sensitive to light hardly concerns us more than how life itself originated” (Darwin 1872, p. 151; Behe 1996, pp. 16–18).


Darwin’s vague account of a light-sensitive spot gradually developing into the complex, camera-like human eye may have a certain superficial plausibility, but it does not constitute a scientific explanation of the eye’s origin. It is simply an invitation to imagine that evolution actually took place. If one wishes to turn imagination into science, one must take into account the structure of the eye on the biomolecular level.


Devlin (1992, pp. 938–954) gives a fairly detailed biochemical description of the human vision process. Biochemist Michael Behe (1996, pp. 18–21) summarizes Devlin’s explanation like this: “When light first strikes the retina a photon interacts with a molecule called 11-cis-retinal, which rearranges within picoseconds to trans-retinal. . . . The change in the shape of the retinal molecule forces a change in the shape of the protein, rhodopsin, to which the retinal is tightly bound. . . . Now called metarhodopsin II, the protein sticks to another protein, called transducin. Before bumping into metarhodopsin II, transducin had tightly bound a small molecule called GDP. But when transducin interacts with metarhodopsin II, the GDP falls off, and a molecule called GTP binds to transducin. . . . GTP-transducin-metarhodopsin II now binds to a protein called phosphodiesterase, located in the inner membrane of the cell. When attached to metarhodopsin II and its entourage, the phosphodiesterase acquires the chemical ability to ‘cut’ a molecule called cGMP . . . Initially there are a lot of cGMP molecules in the cell, but the phosphodiesterase lowers its concentration, just as a pulled plug lowers the water level in a bathtub. Another membrane protein that binds cGMP is called an ion channel. It acts as a gateway that regulates the number of sodium ions in the cell, while a separate protein actively pumps them out again. The dual action of the ion channel and pump keeps the level of sodium ions in the cell within a narrow range. When the amount of cGMP is reduced because of cleavage by the phosphodiesterase, the ion channel closes, causing the cellular concentration of positively charged sodium ions to be reduced. This causes an imbalance of charge across the cell membrane that, finally, causes a current to be transmitted down the optic nerve to the brain. The result, when interpreted by the brain, is vision.”
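The point of this description is that vision depends on an unbroken chain of molecular events. As a rough illustration only, the quoted cascade can be represented as a toy dependency chain in code (the step labels are simplified from the passage above, and the function name is my own; this is not a biochemical model):

```python
# Toy sketch of the phototransduction cascade quoted above. Each step is a
# simplified label; the chain delivers a signal only if every step occurs.
CASCADE = [
    "11-cis-retinal isomerizes to trans-retinal",
    "rhodopsin shifts to metarhodopsin II",
    "transducin swaps GDP for GTP",
    "phosphodiesterase begins cleaving cGMP",
    "ion channel closes as cGMP falls",
    "current travels down the optic nerve",
]

def signal_reaches_brain(steps_present):
    """The signal arrives only if every step in the chain occurs."""
    return all(step in steps_present for step in CASCADE)

print(signal_reaches_brain(set(CASCADE)))       # True
print(signal_reaches_brain(set(CASCADE[:-2])))  # False: the chain is broken
```

The sketch captures only the dependency structure Behe emphasizes: removing any link leaves the signal undelivered.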


Another equally complex set of reactions restores the original chemical elements that started the process, like 11-cis-retinal, cGMP, and sodium ions (Behe 1996, p. 21). And this is just part of the biochemistry underlying the process of vision. Behe (1996, p. 22) stated: “Ultimately . . . this is the level of explanation for which biological science must aim. In order to truly understand a function, one must understand in detail every relevant step in the process. The relevant steps in biological processes occur ultimately at the molecular level, so a satisfactory explanation of a biological phenomenon—such as sight, digestion, or immunity—must include its molecular explanation.” Evolutionists have not produced such an explanation.

The Vesicular Transport System

The lysosome is a compartment within the cell that disposes of damaged proteins. Enzymes within the lysosome dismantle the proteins. These enzymes are manufactured by ribosomes attached to the surface of another cellular compartment, the endoplasmic reticulum. As the enzymes are being manufactured by the ribosomes, they carry special amino acid sequences that allow them to pass through the membrane into the interior of the endoplasmic reticulum. From there, they receive other tags that allow them to pass out of the endoplasmic reticulum. The enzymes make their way to the lysosome, where they bind to the surface of the lysosome. Then yet another set of signal tags allows them to enter the lysosome, where they can do their work (Behe 1998, pp. 181–182; Alberts et al. 1994, pp. 551–650). This transportation network is called the vesicular transport system.


72 Human Devolution: a vedic alternative to Darwin’s theory


In I-cell disease, a flaw in signal tagging disrupts the vesicular transport system. Instead of carrying the protein-degrading enzymes from the ribosomes to the lysosomes, the system carries them to the cell membrane, where they are dumped outside of the cell. Meanwhile, damaged proteins flow into the lysosomes, where they are not degraded. Without the protein-degrading enzymes, the lysosomes fill up like overflowing garbage cans. To deal with this, the cell manufactures new lysosomes, which also fill up with garbage proteins. Finally, when there are too many lysosomes filled with garbage proteins, the whole cell breaks down, and the person with this disease dies. This shows what happens when one part of a complex system is missing—the whole system breaks down. All the parts of the vesicular transport system have to be in place for it to work effectively.
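The logic of I-cell disease can be pictured as a routing rule keyed on a signal tag. In the real pathway the lysosomal tag is a carbohydrate marker, mannose-6-phosphate (M6P), and in I-cell disease the tagging enzyme is defective; the sketch below is only a toy model of that routing decision, with a hypothetical function name:

```python
# Toy model of signal-tag routing in vesicular transport. Enzymes bearing the
# M6P tag are sent to the lysosome; untagged enzymes (the I-cell disease case,
# where the tagging enzyme is defective) default to the secretory route.
def route(tags):
    """Return the destination of a protein based on its signal tags (simplified)."""
    if "M6P" in tags:
        return "lysosome"
    return "outside the cell"  # default secretory pathway

print(route({"M6P"}))  # lysosome
print(route(set()))    # outside the cell -- the I-cell disease outcome
```

The toy model shows why a single tagging failure misroutes the entire class of enzymes rather than just one of them.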


Behe (1996, pp. 115–116) says: “Vesicular transport is a mind-boggling process, no less complex than the completely automated delivery of vaccine from a storage area to a clinic a thousand miles away. Defects in vesicular transport can have the same deadly consequences as the failure to deliver a needed vaccine to a disease-racked city. An analysis shows that vesicular transport is irreducibly complex, and so its development staunchly resists gradualistic explanations, as Darwinian evolution would have it. A search of the professional biochemical literature shows that no one has ever proposed a detailed route by which such a system could have come to be. In the face of the enormous complexity of vesicular transport, Darwinian theory is mute.”

The Blood Clotting Mechanism

The human blood clotting mechanism is another puzzle for evolutionists. Behe (1996, p. 78) says: “Blood clotting is a very complex, intricately woven system consisting of scores of interdependent protein parts. The absence of, or significant defects in, any one of a number of the components causes the system to fail: blood does not clot at the proper time or at the proper place.” The system is thus one of irreducible complexity, not easily explained in terms of Darwinian evolution.


The blood clotting mechanism centers on fibrinogen, a blood protein that forms the fibers that make up the clots. Normally, fibrinogen is dissolved in the blood plasma. When bleeding begins, a protein called thrombin cuts fibrinogen to make strings of a protein called fibrin. The fibrin filaments stick together, forming a network that catches blood cells, thus stopping the flow of blood from a wound (Behe 1996, p. 80). At first, the network is not very strong. It sometimes breaks, allowing the blood to flow out from the wound again. To prevent this, a protein called fibrin stabilizing factor (FSF) creates cross-links between the fibrin filaments, strengthening the network (Behe 1996, p. 88).


Meanwhile, thrombin is cutting more fibrinogen into more fibrin, which forms more clots. At a certain point, the thrombin has to stop cutting fibrinogen, or else so much fibrin would be produced that it would clot up the whole blood system and the person would die (Behe 1996, p. 81).


There is a complex cascade of proteins and enzymes involved in turning the blood clotting system on and off at the proper times. Thrombin initially exists in an inactive form, prothrombin. In this form, it does not cut fibrinogen into the fibrin filaments that make clots. So for the clotting process to start, prothrombin must be converted to thrombin. Otherwise, a person bleeds to death. And once the proper clot is formed, thrombin has to be turned back into prothrombin. Otherwise, the clotting continues until all the blood stops flowing (Behe 1996, p. 82).


A protein called the Stuart factor is involved in the activation of prothrombin, turning it into thrombin, so that the clotting process can start. So what activates the inactive Stuart factor? There are two cascades of interactions, which begin with transformations at the wound site. Let’s consider just one of them. Behe (1996, p. 84) says: “When an animal is cut, a protein called Hageman factor is then cleaved by a protein called HMK to yield activated Hageman factor. Immediately the activated Hageman factor converts another protein, called prekallikrein, to its active form, kallikrein. Kallikrein helps HMK speed up the conversion of more Hageman factor to its active form. Activated Hageman factor and HMK then together transform another protein, called PTA, to its active form. Activated PTA in turn, together with the activated form of another protein called convertin, switch a protein called Christmas factor to its active form. Finally, activated Christmas factor, together with antihemophilic factor . . . changes Stuart to its active form.” The second cascade is equally complicated, and in some places merges with the first.
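The dependency structure of the quoted cascade can be sketched in code. This is a toy model transcribed from the passage above, not from a biochemistry reference; asterisks mark activated forms, and the function and variable names are my own:

```python
# Toy model of the first clotting cascade quoted above. Each activation step
# requires all of its inputs to be present already; drop any factor and
# activated Stuart factor ("Stuart*") is never produced.
STEPS = [
    # (product, inputs required for the activation step)
    ("Hageman*",   {"HMK", "Hageman"}),
    ("kallikrein", {"Hageman*", "prekallikrein"}),
    ("PTA*",       {"Hageman*", "HMK", "PTA"}),
    ("Christmas*", {"PTA*", "convertin", "Christmas"}),
    ("Stuart*",    {"Christmas*", "antihemophilic", "Stuart"}),
]

def cascade(available):
    """Run the activation steps in order; return everything that becomes active."""
    active = set(available)
    for product, needs in STEPS:
        if needs <= active:  # step fires only when every input is active
            active.add(product)
    return active

all_factors = {"HMK", "Hageman", "prekallikrein", "PTA", "convertin",
               "Christmas", "Stuart", "antihemophilic"}
print("Stuart*" in cascade(all_factors))                  # True
print("Stuart*" in cascade(all_factors - {"Christmas"}))  # False
```

Running the sketch with any single factor removed shows the property the text emphasizes: the end product of the cascade depends on every upstream step.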


So now we have the activated Stuart factor. But even that is not enough to start the clotting process. Before the Stuart factor can act on prothrombin, prothrombin has to be modified by having ten of its amino acid subunits changed. After these changes, prothrombin can stick to a cell membrane. Only when the prothrombin is adhering to a cell membrane can it be converted (by the Stuart factor) into thrombin, which initiates clotting. The sticking of the prothrombin to a cell membrane near a cut helps localize the clotting action in the exact region of the cut. But activated Stuart factor turns prothrombin into thrombin at a very slow rate, and the organism would die before enough thrombin was produced to start any effective clotting. So another protein, called accelerin, must be present to increase the speed of the Stuart factor’s action on prothrombin (Behe 1996, pp. 81–83).


So now the prothrombin is converted into thrombin. The thrombin cuts fibrinogen, forming fibrin, which actually forms clots. Now we can turn to the question of how to stop this clotting process once it starts. Runaway clotting would clog up the organism’s blood vessels, with life-threatening results. After thrombin molecules have formed, a protein called antithrombin binds to them, thus inactivating them. But antithrombin binds only when in contact with another protein called heparin, which is found in uninjured blood vessels. This means that antithrombin binds to the activated thrombin molecules only when they enter undamaged blood vessels, inactivating them and stopping the clotting there. In an injured blood vessel the clotting can continue. In this way, the clotting goes on only at the site of the wound, and not in other, uninjured blood vessels. Once the injured vessel is repaired, the clotting stops there too, by a process just as complex as the one that stops blood from clotting in uninjured blood vessels (Behe 1996, pp. 87–88).


After some time, when the wound has healed, the clot itself must be removed. A protein called plasmin cuts the fibrin network that makes up the clot. As one might guess, plasmin first exists in an inactive form, plasminogen, and must be activated at the proper time to remove the clot. Its activation, of course, involves complex interactions with other proteins (Behe 1996, p. 88).


Behe (1996, p. 86) says, “The blood-clotting system fits the definition of irreducible complexity. That is, it is a single system composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system effectively to cease functioning . . . In the absence of any one of the components, blood does not clot, and the system fails.” Evolutionists have not offered any satisfactory explanation for how this complex chemical repair system, involving many unique proteins with very specific functions, came into existence.


Blood-clotting expert Russell Doolittle simply asserts that the required proteins in the system were produced by gene duplication and gene shuffling. But gene duplication just produces a duplicate of an already existing gene. Doolittle does not specify what mutations would have to take place in this duplicated gene to give the protein it produces a new function useful in some evolving blood clotting system. Gene shuffling is based on the idea that each gene is made of several subsections. Sometimes in the course of reproduction the subsections of genes break apart and recombine in a new order. The reshuffled gene would produce a different protein. But the odds against getting the right subsections of genes to come together to form a new gene that would produce a protein useful in the blood-clotting cascade are astronomically high. One protein in the system, TPA, has four parts. Let us assume an animal existed at a time when the blood clotting system was just starting to form, and there was no TPA. Let us further assume that this animal had 10,000 genes, each divided into an average of three subsections. This means 30,000 gene pieces are available for gene shuffling. The odds of getting the four parts that make up TPA to come together randomly are thus one in 30,000⁴—not very likely.


But the main problem is getting all the parts together into a working system. Only such a system, which contributes to the fitness of the organism, can be acted on by natural selection. Isolated parts of a system do not contribute to fitness, and therefore natural selection cannot act on them. So, in order to explain the presence of today’s human blood clotting system, evolutionists first have to show the existence of a simple blood clotting system and then show, step by step, how changes in the genes could produce more and more effective systems that work and contribute to the fitness of an organism. That has not been done in any detailed way (Behe 1996, pp. 90–97).
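The arithmetic behind the odds cited above is easy to check. The figures (10,000 genes, three subsections each, four parts for TPA) are Behe's illustrative assumptions, not measured values:

```python
# Checking the gene-shuffling odds described above: 10,000 genes with an
# average of three subsections each give 30,000 shuffleable pieces, and
# drawing TPA's four specific parts in order has odds of one in 30,000^4.
genes = 10_000
pieces_per_gene = 3
pieces = genes * pieces_per_gene  # 30,000 gene pieces
odds = pieces ** 4                # ordered draws of the four TPA parts
print(pieces)                     # 30000
print(f"{odds:.1e}")              # 8.1e+17
```

So "one in 30,000 to the fourth power" works out to roughly one in 8 × 10¹⁷ under these assumptions.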
To escape this criticism, some scientists suggest that the parts of such a complex system could have had other functions in other systems before coming together in the system in question. But that further complicates an already complicated question. In this case, scientists would then have to show how these other systems with different functions arose in step by step fashion and how parts of these systems were co-opted for another purpose, without damaging those systems.

The DNA Replication System

When a cell divides, the DNA in the cell also has to divide and replicate itself. The DNA replication system in humans and other organisms is another system that is difficult to explain by evolutionary processes. DNA is a nucleic acid, composed of nucleotides. Each nucleotide has two parts: a carbohydrate ring (deoxyribose) and a base attached to the carbohydrate ring. There are four bases: adenine (A), cytosine (C), guanine (G), and thymine (T). One base binds to each carbohydrate ring. The carbohydrate rings join to each other in a chain. At one end of the chain is a 5’OH (five prime hydroxyl) group; at the other end is a 3’OH (three prime hydroxyl) group. The sequence of bases in a strand of DNA is read from the 5 prime end to the 3 prime end of the strand. In cells, two strands of DNA are twisted together in a helix. The bases in the nucleotides of each strand join to each other: A always bonds with T, and G always bonds with C. The two strands are thus complementary, and each can serve as a template for producing the other. If you know the base sequence of one strand of DNA, you know the base sequence of the second strand in the helix. For example, if part of the sequence of bases in one strand is TTGAC, then the same part of the second strand must have the bases AACTG. The end result of replication is two new double strands of DNA, matching the parent double strand. Therefore, when a cell divides into two cells, each one winds up with a matching double strand of DNA (Behe 1998, p. 184).
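The complementarity rule can be expressed in a few lines of code (a minimal sketch; the function name is my own):

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C. Given one
# strand's sequence, the other strand's sequence follows mechanically.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the base-by-base complement of a DNA sequence."""
    return "".join(PAIRS[base] for base in strand)

print(complement("TTGAC"))  # AACTG, as in the example above
```

Applying the function twice returns the original sequence, which is the template property the text describes: each strand determines the other.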


For DNA to replicate, the two coiled strands of DNA have to be separated. But the two complementary strands of DNA in the parent cell are joined by a chemical bond. The replication occurs at places on the DNA strand called “origins of replication.” A protein binds to the DNA at one of these places and pushes the strands apart. Then another protein called helicase moves in and, taking advantage of the opening, starts pushing down the strand (like a snowplow). But once the two DNA strands are pushed apart, they want to rejoin, or if they don’t rejoin, each single strand can become tangled as hydrogen bonding takes place between its different parts. To solve this problem, there is SSB, the single-strand binding protein, which coats the single strand, preventing it from tangling or rebonding with the other DNA strand. Then there is another problem. As the helicase moves forward, separating the two strands of coiled DNA, the two strands of DNA in front of the advancing helicase become knotted. To remove the knots, an enzyme called gyrase cuts, untangles, and rejoins the DNA strands (Behe 1998, p. 190).


The actual replication of a DNA strand is carried out principally by the polymerase enzyme, which binds itself to the DNA strand. The polymerase is attached to the original DNA strands by a ring of “clamp proteins,” and a complex system of proteins loads the ring onto the DNA strand. A short segment of RNA, serving as a primer, starts the replication process; the polymerase then continues adding complementary nucleotide bases to the 3 prime end of the new chain. For example, if on the original DNA strand there is a G base, the polymerase adds a complementary C base to the new strand. The adding of nucleotide bases takes place at the “replication forks,” the places where the two original DNA strands are pushed apart (Behe 1998, p. 188).


As a replication fork moves along one strand from the 5 prime end to the 3 prime end, the polymerase enzyme replicates this strand, called the leading strand, continuously. DNA can be replicated only in this direction, toward the 3 prime end. But the two DNA strands that make up a DNA double helix face in opposite directions. So how is the second strand replicated? While the polymerase enzyme is replicating the leading strand in the continuous manner just described, moving always toward the leading strand’s 3 prime end, it simultaneously replicates the second, or lagging strand, in a discontinuous manner, adding groups of nucleotides to its new complement in the opposite direction. The process starts with a short segment of RNA, which serves as a primer. A few nucleotides are then added to this piece of RNA, going backwards towards the 3 prime end of the lagging strand. After adding these few nucleotides going backwards, the polymerase replication machinery is unclamped and moves forward and is reclamped at the new position of the replication fork, which is continually moving toward the 3 prime end of the leading strand and away from the 3 prime end of the lagging strand. The polymerase continues replicating the leading strand by adding more bases to its new complementary strand going forward and at the same time continues replicating the lagging strand by adding to its new complementary strand another set of bases going backwards. To the lagging strand’s new complement, the polymerase adds another piece of RNA primer and a few more nucleotides going backward until they touch the previous set of RNA primer and nucleotides. Each set of nucleotides replicated on the lagging strand’s complement is called an Okazaki fragment. To join the new Okazaki fragment to the previous one, a special enzyme has to come in and remove the RNA primer between the two fragments. Then the two Okazaki fragments have to be joined by an enzyme called DNA ligase. 
Then the polymerase replication machinery has to be unclamped, moved forward to the replication fork, and clamped again. The process proceeds until both the leading and lagging strands have replicated completely (Behe 1998, p. 191). There is also an elaborate proofreading system that corrects any mistakes in the replication process.
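A toy model can convey the discontinuous copying of the lagging strand. The sketch below ignores primers, 5-prime-to-3-prime directionality, and the clamping machinery; it shows only the two-phase idea of synthesizing short Okazaki-fragment-sized pieces and then ligating them into one continuous strand (function and variable names are my own):

```python
# Toy model of lagging-strand replication: the template is copied in short
# pieces (standing in for Okazaki fragments), and a "ligase" step then joins
# them into one continuous complementary strand.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate_lagging(template, fragment_len=4):
    """Copy the template piecewise, then ligate the fragments."""
    fragments = []
    for start in range(0, len(template), fragment_len):
        chunk = template[start:start + fragment_len]
        fragments.append("".join(PAIRS[b] for b in chunk))
    return fragments, "".join(fragments)  # the join stands in for DNA ligase

frags, strand = replicate_lagging("TTGACTTGAC")
print(frags)   # ['AACT', 'GAAC', 'TG']
print(strand)  # AACTGAACTG
```

Even in this drastically simplified form, the model makes the text's point visible: the lagging strand's copy exists as disconnected pieces until a separate joining step completes it.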


Behe (1998, p. 192) notes: “No one has ever published a paper in the professional science literature that explains in a detailed fashion how DNA replication in toto or any of its parts might have been produced in a Darwinian, step-by-step fashion.” The same is true of thousands of other complex biomolecular structures and processes found in humans and other living things.

Neural Connections in the Brain

J. Travis (2000c) says, “The developing human brain . . . must make sure that its billions of nerve cells correctly establish trillions of connections among themselves.” Since scientists say that all conscious functions are products of brain activity, these connections assume a lot of importance. Aside from some vague speculations about “guidance molecules,” and an abiding faith that it must have happened by evolution, scientists have offered no detailed explanation of how the connections are made. On the basis of experiments with fruit flies, scientists say they have discovered a gene that appears to code for 38,000 different “guidance molecules.” Even if true, this creates a huge problem for evolutionists. How could one gene be responsible for so many guidance molecules? How are those 38,000 different “guidance molecules” distributed in the proper way to make the required connections among the nerve cells in the fly brain? And even assuming one could figure this out, how would one go from there to another, more complicated brain simply by random mutations in DNA and natural selection?
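The fruit-fly gene usually cited in this connection is Dscam (an identification assumed here, since the text does not name the gene). The figure of roughly 38,000 molecules comes from combinatorial alternative splicing: the gene contains clusters of mutually exclusive exons, and one exon is chosen from each cluster, so the number of possible protein isoforms is the product of the cluster sizes. Using the commonly reported cluster sizes, the count works out as follows:

```python
# Combinatorial alternative splicing: one exon is chosen from each variable
# cluster, so the isoform count is the product of the cluster sizes. The
# sizes below are the commonly reported figures for the fly gene Dscam
# (an assumption on my part; the text itself does not name the gene).
exon_cluster_sizes = [12, 48, 33, 2]

isoforms = 1
for size in exon_cluster_sizes:
    isoforms *= size

print(isoforms)  # 38016 -- the "38,000" figure
```

This answers the "how could one gene?" part of the question arithmetically; how the isoforms are deployed to wire specific connections is the part the text identifies as unexplained.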

The Placenta

Another problem for evolutionists is the origin of the placenta in mammals. The DNA of a fetus is a combination of DNA from both the mother and father. It is therefore different from that of the mother, and the immune system of the mother should normally reject the fetus as foreign tissue. The placenta isolates the fetus from direct contact with the mother’s immune system. It also supplies the fetus with nutrients and removes its wastes. Harvey J. Kliman, a reproductive biologist at Yale University, says, “In many ways, the placenta is the SCUBA system for the fetus, while at the same time being the Houston Control Center guiding the mother through pregnancy.” According to evolutionists, before the placental mammals came into existence, all land animals reproduced by laying eggs. In a report in Science News, John Travis (2000d, p. 318) says, “As with many evolutionary adaptations, the origins of the placenta remain shrouded in mystery. That hasn’t kept biologists from speculating, however.” But speculations are not real scientific explanations, and the real scientific explanations just are not there.


“In the past ten years,” says Behe (1998, p. 183), “Journal of Molecular Evolution has published more than a thousand papers. . . . There were zero papers discussing detailed models for intermediates in the development of complex biomolecular structures. This is not a peculiarity of JME. No papers are to be found that discuss detailed models for intermediates in the development of complex biomolecular structures, whether in the Proceedings of the National Academy of Sciences, Nature, Science, the Journal of Molecular Biology or, to my knowledge, any science journal.”

Similarity of Apes and Humans

Physical anthropologists and other scientists have tried to use genetics to clarify the supposed evolutionary relationships between humans, chimpanzees, and gorillas. Are humans closer to chimps or gorillas? Are chimps and gorillas closer to each other than either of them is to humans? Different kinds of studies yield different results. According to Marks (1994), some researchers say chromosome structure links humans and gorillas, while others say it links humans and chimps, while yet others say it links chimps and gorillas. Mitochondrial DNA evidence shows that humans, chimps, and gorillas are equally close to each other. Evidence from nuclear DNA is “discordant,” with the X chromosome evidence making chimps closest to gorillas and the Y chromosome evidence making chimps closest to humans. As far as skeletal evidence is concerned, the cranium links humans and chimps, but the rest of the skeleton links chimps and gorillas (Marks 1994, pp. 65–66).


In sorting out this confusing and contradictory set of conclusions, many scientists act on the belief that genetic evidence is superior to other kinds of evidence. But Marks (1994, p. 65) questions this belief: “Molecular studies bearing on problems of anthropological systematics, it seems, have often suffered from [poor] quality control, rash generalizations, belligerent conclusions, and the gratuitous assumption that if two bodies of work yield different conclusions, the genetic work is more trustworthy.”


Sibley and Ahlquist (1984, p. 11) claimed to have used molecular methods (DNA hybridization) to reconstruct the phylogeny of chimps, gorillas, and humans. They said the genetic evidence showed that first chimps diverged from gorillas, and then humans diverged from chimps. But Marks (1994, p. 65) pointed out: “The conclusion here was derived by 1) moving correlated points into a regression line and recalculating their values; 2) substituting controls across experiments; and 3) making precise alterations on the basis of a variable that was not actually measured.” To put it more plainly, the study by Sibley and Ahlquist was flawed by artificial manipulation of the experimental data. Marks (1994, p. 66) noted: “That these manipulations are not part of the general canon of scientific protocols, however, is not complemented by the fact that they were not mentioned in the original reports, and were discovered serendipitously by others. . . . These revelations stood to make the researchers themselves look less than honest and to make public advocates of the work look less than wise.”


The study of Sibley and Ahlquist was flawed not only by these technical lapses, but also by the incorrectness of the study’s fundamental assumptions. According to Marks (1994, p. 69), these assumptions were (1) that humans came from either chimps or gorillas by a two-step process (i.e. chimps from gorillas, then humans from chimps; or gorillas from chimps, and then humans from gorillas) and (2) that this process is “discernible with genetic data and theory as they currently exist.” Marks (1994, p. 69) explained, “These assumptions are pernicious because . . . they misrepresent the literature. In the first place, it must be appreciated that we do not know there were in fact two sequential divergences, and not a single trifurcation.” That is to say, it is quite possible that humans, chimps, and gorillas all came from an unknown common ancestor. The evidence might even be seen as consistent with creation of all three by God in nearly their present forms.


Evolutionists have for many years said that the DNA of humans and chimps is 97% identical. They have claimed that this proves an evolutionary connection between the two species. There are several things wrong with this kind of reasoning. First of all, the claimed 97% identity was derived from crude DNA hybridization techniques (Sibley and Ahlquist 1987). Researchers broke human DNA into little parts in test tubes and then observed how much of it recombined with pieces of chimp DNA. Three percent did not recombine. But no one really knows how similar humans and chimps really are on the actual genetic level. The human genome has only recently been sequenced. This sequencing merely gives the order of the roughly 3 billion nucleotide bases in the DNA molecules that make up the human genome. It is like having the sequence of letters that makes up a book in a foreign language. To read the book, you have to break the sequence of letters into words and sentences and understand their meaning. This has not happened yet with DNA. According to current understanding, ninety-seven percent of the bases in the human genome do not make up genes. They are called junk DNA. Sorting out the sequences that represent actual genes instead of junk DNA could take decades. The chimp genome has not even been sequenced, and it is not likely to be sequenced for years to come. So at the present moment there is no real basis for making any truly scientific comparison between the human genome and the chimp genome. We cannot at this point say, “Here are all the chimp genes, and here are all the human genes,” and talk about how similar or different they really are in total.
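To make concrete what a direct comparison would involve once both genomes were sequenced and aligned, here is a minimal sketch of a base-by-base identity count. The sequence fragments are invented for illustration; they are not real human or chimp DNA.

```python
# Illustrative only: a base-by-base comparison of two aligned sequences.
# The toy fragments below are invented, not real human or chimp DNA.

def percent_identity(seq_a, seq_b):
    """Percent of positions at which two aligned sequences share the same base."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / len(seq_a)

human_fragment = "ATGGCCTTACGATCGGATCA"  # hypothetical
chimp_fragment = "ATGGCCTAACGATCGGATCA"  # hypothetical, differs at one site

print(percent_identity(human_fragment, chimp_fragment))  # 95.0
```

Hybridization experiments, by contrast, never examine individual bases at all; they only measure how much of the fragmented DNA re-binds, which is a much cruder proxy for this kind of position-by-position comparison.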


We should also keep in mind that genes only specify what amino acids should be strung together to form protein molecules (or other polypeptides). In other words, the genes simply generate the molecular raw materials for the construction of bodies and body functions. It should not be surprising that the bodies of humans and chimpanzees are composed of roughly the same molecular ingredients. We exist in the same kinds of environments, and eat basically the same kinds of foods. So the similarity of genes and molecular ingredients does not rule out design. Designers of different kinds of automobiles make use of basically the same ingredients. In fact, the real problem is not the ingredients—the real problem is the arrangement of those ingredients into complex forms that work together to form a functioning machine. At a factory, the raw materials may arrive in the form of steel, glass, rubber, plastic, etc. But the factory workers must then shape and arrange those raw materials into an automobile. Similarly, genes may specify the formation of molecular raw materials, but it has not been shown that the genes specify exactly how those molecular raw materials are organized into the bodies of chimps or humans. Unless this can be shown, in some exact way, it is not unreasonable to attribute the similarity of chimp and human DNA, as well as the complex bodily forms of chimps and humans, to intelligent design.


The most recent research, as of the time of this writing, suggests that the human and chimpanzee genomes differ by as little as 1.5 percent (Travis 2000a). “What does that number mean? No one can say at the moment,” writes John Travis in Science News (2000a, p. 236). With so little difference, it is hard to explain many things—such as why the human brain is twice the size of the chimpanzee brain (Travis 2000a, p. 237). So the similarity of human and chimpanzee DNA is actually seen by many evolutionists as a significant problem that needs to be explained. Frans de Waal, a primatologist at Emory University, says, “Most of us find it hard to believe we differ by only 1.5 percent from an ape. It’s absolutely critical that we know what that 1.5 percent is doing” (Travis 2000a, p. 237). It appears that something more than DNA is necessary to put together the complex structures that define different species. That “something more” is arguably intelligent design.


Some scientists point out that human chromosome 2 appears to be a combination of the chimpanzee chromosomes 12 and 13. They take this as evidence for evolution. But the fact that chromosomes may have been combined does not tell us how they were combined. It may have been part of an intelligently designed system for producing different bodily forms by systematic manipulation of the chromosomes. Other scientists point to the existence of “pseudogenes” as evidence for evolution. Pseudogenes are stretches of DNA that look like genes but do not function as genes. For example, human DNA contains a stretch that resembles a gene that in other animals produces vitamin C. But in humans it is not active. The fact that a gene may have been deactivated, however, does not tell us how it was deactivated. It could have been by the action of an intelligent designer.

African Eve

Some scientists claim that genetic evidence shows all living humans can trace their ancestry to a female who lived in Africa about 200,000 years ago. Her descendants then spread throughout the world, replacing whatever hominids existed there, without interbreeding with them. The hominids they replaced would have been Neandertals or Neandertal-like descendants of Homo erectus, who supposedly left Africa in a previous wave of emigration between one and two million years ago.

Evidence from Mitochondrial DNA

The above scenario is called the African Eve hypothesis, or the out-of-Africa replacement hypothesis. It was first announced in the 1980s by researchers such as Cann, Stoneking, and Vigilant, among others. Their conclusions were based on studies of mitochondrial DNA. Most of the DNA in human cells is found in the nucleus. This nuclear DNA is a combination of DNA from the mother and father. The sex cells of males and females contain half the DNA found in each parent. Thus when the father’s sperm combines with the mother’s egg, the fertilized egg of the offspring contains a full complement of DNA, different from that of either the father or the mother, in the nucleus. But the mother’s egg cell also contains small round compartments (outside the nucleus) called mitochondria, which are involved in the cellular energy production process.


The presence of mitochondria in eukaryotic cells is a bit of a mystery. In eukaryotic cells, the DNA is found on chromosomes isolated in the cell’s nucleus. In prokaryotic cells, there is no nucleus and the DNA molecules simply float in the cell’s cytoplasm. Almost all of the plants and animals living today are either single eukaryotic cells or are composed of many eukaryotic cells. Only bacteria and blue-green algae are prokaryotic. Evolutionists theorize that the mitochondria in today’s cells are remnants of prokaryotic cells that invaded primitive eukaryotic cells. If that were true, then this most probably happened very early in the evolutionary process, when only single celled creatures existed. This implies that the mitochondria in all living things should be quite similar. But the mitochondrial DNA in mammals “cannot generally be classified as either prokaryote-like or eukaryote-like.” Furthermore: “The mammalian mt [mitochondrial] genetic code is different from the so-called universal genetic code . . . mammalian mitochondria are very different from other mitochondria. In yeast mitochondria, for example, not only is there a slightly different genetic code, but also the genes are widely spaced and in a different order, and in some cases they contain intervening sequences. These radical differences make it difficult to draw conclusions regarding mitochondrial evolution” (Anderson et al. 1981, p. 464). In other words, the presence of the various kinds of mitochondria in different creatures argues against an evolutionary origin.


But let us now return to the main point. In mammals, the mitochondria in the mother’s egg have their own DNA. This mitochondrial DNA does not, however, combine with the DNA from the father. Therefore, all of us have in our cells mitochondria with DNA that came only from our mothers. The mitochondrial DNA in our mothers came from their mothers, and so on back into time. The African Eve researchers assume that the only changes in the mitochondrial DNA are the changes that accumulate by random mutations. By studying the rate of mutation, scientists believe they can use mitochondrial DNA as a kind of clock, relating numbers of mutations to numbers of years. And by looking at the mitochondrial DNA in different human populations in various parts of the world, scientists believe they can sort out which group is the parent group for the others.


They believe that the parent group, which must also be the oldest group, can be identified by computer programs that sort the population groups into branching tree patterns. Out of the many statistical trees that can be generated, the shortest one, the one with the least number of branchings, is called the “maximum parsimony tree,” and researchers believe it to be identical to the actual historical relationships of the various population groups in the tree. The branch (“clade”) forming the base of the tree (the “basal clade”) is supposed to be the parent group. According to evolutionary theory, it should, in addition to being at the base of the tree, have the most variation (i.e. the most mutations) in its mitochondrial DNA, relative to the other population groups. So in this way, researchers believe they can find where and when the root population existed. But some scientists say that the clock is not very accurate and that the genetic information contained in the mitochondrial DNA in today’s populations is not sufficient to tell us with certainty the geographical location of the first human population.


In one of the original African Eve reports (Cann et al. 1987), researchers analyzed the mitochondrial DNA from groups of modern humans from different regions throughout the world. They analyzed the sequence of nucleotide bases found in a particular section of the mitochondrial DNA in all of the individuals being studied. They then used a computer program to arrange the various kinds of mitochondrial DNA sequences (called haplotypes) into a tree. According to the report, the root (or basal clade) of the maximum parsimony tree of haplotypes was the African group. But Templeton (1993, p. 52) pointed out that Maddison (1991) had rerun the data and found ten thousand trees that were shorter (i.e. more parsimonious) than the “maximum parsimony tree” reported by the African Eve researchers. Many of these trees had mixed African/Asian roots. Analyzing another “African Eve” report (Vigilant et al. 1991), Templeton (1992) found 1,000 trees two steps shorter than the one put forward by those researchers, who had claimed it was a “maximum parsimony” tree. All of the thousand more parsimonious trees found by Templeton in his 1992 study had non-African basal clades (Templeton 1993, p. 53). This would be consistent with accounts found in the ancient Sanskrit writings of India, which would place the original human populations on this planet in the region between the Himalayas and the Caspian Sea.


Why such different results? Templeton (1993, p. 52), considering another African Eve report, explained: “Computer programs . . . cannot guarantee that the maximum parsimony tree will be found when dealing with such large data sets as these because the state space is too large to search exhaustively. For example, for the 147 haplotypes in Stoneking, Bhatia, and Wilson (1986), there are 1.68 x 10^294 possible trees. Finding the maximum parsimony set among these many possibilities is nontrivial.” The computer programs tend to pick out a tree that is maximally parsimonious only in relation to a subset of the total number of possible trees. Which subset of trees is selected depends on the order in which data are fed into the computer. To guard against this problem, it is necessary to randomize the sequence in which the data are entered over a series of runs. When one has done this a sufficient number of times, so as to find the maximum parsimony trees for various local subsets of the data, then one can compare these trees and arrive at a conclusion. This was not done in the original African Eve studies (the computer program was run only once), and thus the conclusions are not reliable. Also, even data randomization techniques do not completely solve the problem (Templeton 1993, p. 53). So this means that it really is not possible to conclusively determine the common geographical origin of dispersed human populations from the genetic data available today.
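The size of the search space Templeton refers to can be checked directly: the number of distinct unrooted binary trees on n labeled tips is the double factorial (2n - 5)!!. The short sketch below (the function name is mine, not from any cited software) confirms the order of magnitude for 147 haplotypes.

```python
def num_unrooted_trees(n_taxa):
    """Count distinct unrooted binary trees for n_taxa tips: (2n - 5)!!"""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):  # odd factors 3, 5, ..., 2n - 5
        count *= k
    return count

# Five taxa already allow 15 distinct trees; 147 haplotypes allow
# roughly 1.7 x 10^294, which is why no program can examine every
# tree exhaustively and heuristic searches can miss shorter trees.
print(num_unrooted_trees(5))    # 15
print(num_unrooted_trees(147))  # about 1.7e294
```

Because the search is heuristic rather than exhaustive, the tree a program returns can depend on accidents such as data-entry order, which is exactly the problem the randomized reruns are meant to expose.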


In addition to presenting inaccurate conclusions about maximum parsimony trees with African basal clades, the African Eve researchers (Cann et al. 1987; Vigilant et al. 1991) also made misleading statements about the level of mitochondrial DNA diversity in various populations. The African Eve researchers assumed that mutations occur at some fixed rate, and therefore the population with the most internal diversity, relative to the others, should be the oldest. Because the African populations had a higher level of internal diversity than Asian and European populations, the researchers claimed that the African populations were the oldest. But Templeton (1993, p. 56) noted that “no statistical test is presented in either paper in support of this claim.” He pointed out that when proper statistical methods are applied, there is no statistically significant difference in the degree of diversity among the mitochondrial DNA of Africans, Europeans, and Asians (Templeton 1993, p. 57). As Templeton himself put it: “The apparent greater diversity of Africans is an artifact of not using sufficient statistics for making inference about the . . . process that led to the present-day human populations. In summary, the evidence for geographical origin is ambiguous. . . . there is no statistically significant support for an African origin with any mtDNA data” (Templeton 1993, p. 57).


Now let us consider the ages for the antiquity of anatomically modern humans proposed by the original African Eve theorists. They tried to calculate the time it took for the observed mtDNA diversity in today’s human populations to accumulate, based on rates of mutation. This time is called “the time to coalescence,” the time at which all the mtDNA sequence diversity in present human populations coalesces into a single past mtDNA sequence, the source of the present diversity. One group of researchers (Stoneking et al. 1986) got an age of 200,000 years for Eve, within a range of 140,000 to 290,000 years, using intraspecific calculations for the molecular clock. Intraspecific means that they based calculations on rates of mutations in human populations only. Another group (Vigilant et al. 1991), using interspecific calculations, also got an age of 200,000 years for Eve, but with a range of 166,000 to 249,000 years. Interspecific means they based their calculations on assumptions about the time at which the human line separated from the chimpanzee line.


First, let us consider the report from the researchers who relied on interspecific calibration of the rate of mutation (Vigilant et al. 1991). Their calibration of the mutation rate was made using either 4 million or 6 million years as the time since the human line supposedly diverged from the chimpanzee line. These times of divergence, when used in calculations that take into account statistical uncertainty, give times of coalescence for human mtDNA of 170,000 and 256,000 years respectively (Templeton 1993, p. 58). But Gingerich (1985) estimated that the divergence between humans and chimps took place 9.2 million years ago. A rate of change based on this date would greatly increase the time to coalescence for modern mtDNA diversity, making it as much as 554,000 years (Templeton 1993, pp. 58–59). Furthermore, Lovejoy and his coworkers (1993) pointed out that Vigilant et al. (1991) made a mathematical error (they used the wrong transition-transversion ratio), which when corrected gives an age for Eve of at least 1.3 million years (Frayer et al. 1993, p. 40).
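The sensitivity of interspecific calibration to the assumed split date follows from simple proportionality: the mutation rate is calibrated as human-chimp divergence divided by split time, so the inferred coalescence time scales linearly with the split date. The ratio below is back-calculated from the 170,000-year figure quoted above, purely for illustration; it is not taken from Vigilant et al. or Templeton.

```python
# Under interspecific calibration, the inferred coalescence time is
# (human mtDNA diversity) / rate, where rate = (human-chimp divergence) /
# split_time. The two divergence measurements cancel into one ratio.
# DIVERSITY_RATIO is back-derived from the 170,000-year estimate for a
# 4-million-year split quoted in the text; it is illustrative only.

DIVERSITY_RATIO = 0.0426  # (human mtDNA diversity) / (human-chimp divergence)

def coalescence_time(split_time_years):
    """Inferred mtDNA coalescence time under a given split-date assumption."""
    return split_time_years * DIVERSITY_RATIO

for split in (4e6, 6e6, 9.2e6):  # assumed human-chimp split dates in years
    print(f"split {split:.1e} yr -> Eve at about {coalescence_time(split):,.0f} yr")
```

By pure scaling, a 9.2-million-year split gives roughly 392,000 years; Templeton's figure of as much as 554,000 years also folds in statistical uncertainty, which the simple proportionality above leaves out.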


It is easy to see that this whole “molecular clock” business is extremely unreliable, because it is based on speculative evolutionary assumptions. It is not at all certain that humans and chimps had a common ancestor of the kind proposed by Darwinian evolutionists. And, as we have seen, even if we assume that chimps and humans did have a common ancestor, the time at which they diverged from that common ancestor is not known with certainty, thus leading to widely varying calibrations of mutation rates and widely varying age estimates for the time to coalescence of modern mitochondrial DNA diversity.


Now let’s consider the conclusions of those who relied on intraspecific calculations—i.e. the rate that mutations accumulate in humans, without any reference to a supposed time of divergence between the chimpanzee and human lines. Templeton pointed out that this methodology did not take into account several “sources of error and uncertainty.” For example, in actual fact, mutations don’t accumulate at some steady deterministic rate. The rate of mutation is a stochastic, or probabilistic, process, with a Poisson distribution. The Poisson distribution, named after the French mathematician S. D. Poisson, is used in calculating the probabilities of occurrence of accidental events (such as spelling mistakes in printed books or mutations in DNA). “In this regard,” says Templeton (1993, p. 57), “it is critical to keep in mind that the entire human species represents only one sample of the coalescent process underlying the current array of mtDNA variations. Hence, even if every human mtDNA were completely sequenced, the rate calibration were known with no error, and the molecular clock functioned exactly like an ideal Poisson process, there would still be considerable ambiguity about the time to coalescence. . . . stochasticity therefore sets an inherent limit to the accuracy of age estimates that can never be completely overcome by larger sample sizes, increased genetic resolution, or more precise rate calibration.”
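Templeton's point about Poisson stochasticity can be made concrete with invented numbers. Assuming, purely for illustration, that 25 mutations are expected over the coalescent interval and that each mutation corresponds to about 8,000 years, the spread of a single Poisson draw translates into a wide range of inferred ages even if the rate itself were known exactly.

```python
import math

# Hypothetical illustration of Poisson stochasticity in the molecular clock.
# Even with an exactly known rate, the mutation count over the coalescent
# interval is a single Poisson draw, so the inferred age carries an
# irreducible spread. All numbers are invented, not from any cited study.

expected_mutations = 25      # hypothetical mean mutation count over the interval
years_per_mutation = 8_000   # hypothetical: 25 mutations ~ 200,000 years

sigma = math.sqrt(expected_mutations)      # Poisson std dev = sqrt(mean)
low = expected_mutations - 1.96 * sigma    # rough 95% interval on the count
high = expected_mutations + 1.96 * sigma

print(f"point estimate: {expected_mutations * years_per_mutation:,} years")
print(f"~95% range: {low * years_per_mutation:,.0f} "
      f"to {high * years_per_mutation:,.0f} years")
```

Here a single 200,000-year point estimate comes with a spread from roughly 122,000 to 278,000 years, and as Templeton notes, no amount of extra sampling or sequencing removes this kind of uncertainty.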


Stoneking and his coauthors of a 1986 study acknowledged the problem of stochasticity but did not, says Templeton, take adequate steps to account for it. Stoneking and his coauthors estimated that the divergence among the mtDNA samples in the human populations they studied amounted to between 2 and 4 percent. How long did it take for this amount of divergence to accumulate? Stoneking and his coauthors calculated it to be about 200,000 years. But Templeton found that if probabilistic effects are properly taken into account, a figure of 290,000 years is obtained. Templeton (1993, p. 58) then pointed out that “the actual calibration points in their paper indicate a fivefold range (1.8% to 9.3%), and the work of others would indicate an even broader range (1.4% to 9.3%).” These broadened rates give times to coalescence ranging from a minimum of 33,000 years to a maximum of 675,000 years.


African Eve theorists, and others, believe that mitochondrial DNA is not subject to natural selection. This is taken to mean that the only factor influencing the differences in the mitochondrial DNA sequences in different populations is the accumulation of random mutations at some fixed rate. If this is true, then the molecular clock would be running at the same speed in different populations. But if natural selection is influencing the differences in the DNA in different populations, that would throw off the clock. For example, if in one population natural selection were eliminating some of the mutations, this would make that population appear younger than it really is. If such things do happen, there would no longer be any firm basis for attaching absolute numbers of years to a particular degree of variation, nor would there be any firm basis for making relative age judgments among different populations. There is some evidence that natural selection is in fact operating in mitochondrial DNA. For example, Templeton (1993, p. 59) points out that there is a difference in the degree of variation in the protein-coding and noncoding regions of the mitochondrial DNA in certain populations. If mutations were neutral, this should not be the case. The rate of mutation should be the same in both the coding and noncoding parts of the mitochondrial DNA. Other researchers (Frayer et al. 1993, pp. 39–40) reach similar conclusions: “All molecular clocks require evolutionary neutrality, essential for constancy in the rate of change. But continuing work on mtDNA has documented increasing evidence for selective importance in mtDNA. For example, studies by Fos et al. (1990), MacRae and Anderson (1988), Palca (1990), Wallace (1992), and others have conclusively demonstrated that mtDNA is not neutral, but under strong selection. . . . mtDNA is a poor gear to drive a molecular clock.”
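The neutrality check described above can be sketched as a simple per-site comparison of variation in coding and noncoding regions. All the counts below are hypothetical; they illustrate only the logic of the test, not any actual mtDNA data.

```python
# Hypothetical sketch of the neutrality check: under a neutral clock, the
# per-site variation in coding and noncoding mtDNA regions should be
# roughly equal; a large imbalance suggests selection is pruning mutations
# in one region. All counts below are invented for illustration.

def per_site_variation(variable_sites, region_length):
    """Fraction of sites in a region that show variation."""
    return variable_sites / region_length

coding = per_site_variation(variable_sites=30, region_length=10_000)
noncoding = per_site_variation(variable_sites=45, region_length=1_000)

ratio = noncoding / coding
print(f"noncoding/coding variation ratio: {ratio:.0f}")  # 15
# A ratio far from 1 is the kind of signal that argues against neutrality.
```

A real analysis would add a statistical test of whether the imbalance could arise by chance, but the basic comparison is as simple as this.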


Frayer and his coauthors (1993, p. 40) also state: “Since random mtDNA losses result in pruning off the evidence of many past divergences, the trees constructed to link present populations are altered by unknown and unpredictable factors. Each of these unseen divergences is a genetic change that was not counted when the number of mutations was used to determine how long ago Eve lived. Since these changes are influenced by fluctuations in population size and the exact number of uncounted mutations depends on the particular details of the pruning process, unless the complete population history is known, there is no way to calibrate (and continually recalibrate) the ticking of the clock. Given the fact that each population has a separate demographic history (with respect to random loss events), this factor alone invalidates the use of mtDNA variation to ‘clock’ past events (Thorne and Wolpoff 1992).”


That such things happen is confirmed by the discovery of an anatomically modern human fossil from Lake Mungo, Australia, which was 62,000 years old and had mitochondrial DNA greatly different from any known from modern humans (Bower 2001). This shows that lines of mitochondrial DNA have in fact been lost, thus calling into question the accuracy of the mtDNA molecular clock.


There are other factors affecting the mtDNA diversity in today’s human populations, in various regions of the world, that can throw off the accuracy of the mtDNA clock. One such factor is population size expansion. If the population increases in one region more rapidly than in another, this can cause greater diversity in that population. But the diversity is not an indication that this population is necessarily older than (and hence the source of) other populations in other regions. Also, the diversity observed in various populations can point not to population movements from one place to another, but the movement of genes through a population that is already distributed over a wide area. And this does not exhaust the possible causes of mtDNA diversity found in different human populations. Summarizing the problem, Templeton (1993, p. 59) says: “The diversity in a region does not necessarily reflect the age of the regional population but rather could reflect the age since the last favorable mutation arose in the population, the demographic history of population, size expansion, the extent of gene flow with other populations, and so on.” In general, these factors contribute to underestimation of the age of the human species (Templeton 1993, p. 60). Sophisticated statistical methods, such as “nested cladistic analysis,” allow scientists to discriminate to some degree between the various possible models for the generation of mitochondrial DNA diversity in human populations (as between geographical expansion models and gene flow models). Applying nested cladistic analysis to human mitochondrial DNA variation, Templeton found no evidence of a massive migration out of Africa that replaced all other hominid populations. Templeton (1993, p. 65) said, “The failure of the cladistic geographical analysis to detect an out-of-Africa population expansion cannot be attributed to inadequate sample sizes or to low genetic resolution . . . 
Hence, the geographical associations of mtDNA are statistically significantly incompatible with the out-of-Africa replacement hypothesis.” Templeton concluded (1993, p. 70): “(1) the evidence for the geographical location of the mitochondrial common ancestor is ambiguous, (2) the time at which the mitochondrial common ancestor existed is extremely ambiguous but is likely to be considerably more than 200,000 years.”

Evidence from Nuclear DNA

If, as supporters of the African Eve hypothesis claim, there was a population movement of anatomically modern humans out of their place of origin in Africa resulting in total replacement of the previous hominid populations in Europe and Asia, this should be supported not only by mitochondrial DNA evidence but also by DNA evidence from the cell’s nucleus. However, in his analysis of the early African Eve reports, Templeton (1993, p. 65) said, “. . . there is no single set of assumptions that allows the mtDNA and nuclear data to be compatible with an out-of-Africa replacement hypothesis.”


One group of researchers (Breguet et al. 1990) looked at variation in the B locus of the gene for the human apoprotein. According to Templeton (1993, pp. 68–69), their detailed analysis led them to conclude that “Caucasoid populations (located from North Africa to India) were closest to the ancestral genetic stock and that worldwide genetic differentiation at this locus is best explained by westward and eastward gene flow from this geographical region and not by a sub-Saharan origin.” For researchers like myself, who are operating from a perspective influenced by the ancient Sanskrit writings of India, which posit recurrent appearances of the human species (after planetary deluges) in the Himalayan region, this is quite interesting.


More recently, researchers have found yet another problem with the African origins theory. This problem involves the globin gene cluster in humans. A gene or part of a gene at a particular location on a chromosome may appear in several different forms called alleles. One individual will have one allele, and a second individual another allele. In analyzing globin alleles in various populations, authors of a recent textbook found that the observed degree of variation implied an age much greater than 200,000 years for modern human populations. Indeed, looking at another part of the globin gene cluster, the authors stated that “two alleles from a non-coding (and therefore neutral) region have apparently persisted for 3 million years.” They concluded, “To date, it is unclear how the pattern found in the globin genes can be reconciled with a recent African origin of modern humans” (Page and Holmes 1998, p. 132). The globin evidence is consistent with Puranic accounts of extreme human antiquity.


Some researchers, considering the complexities surrounding genetic data, have suggested that fossils remain the most reliable evidence for questions about human origins and antiquity: “Unlike genetic data derived from living humans, fossils can be used to test predictions of theories about the past without relying on a long list of assumptions about the neutrality of genetic markers, mutational rates, or other requirements necessary to retrodict the past from current genetic variation . . . genetic information, at best, provides a theory of how modern human origins might have happened if the assumptions used in interpreting the genetic data are correct” (Frayer et al. 1993, p. 19). I agree that genetic evidence does not always trump archeological evidence. This means that the archeological evidence for extreme human antiquity documented in Forbidden Archeology provides a much needed check on the rampant speculations of genetic researchers.


So where do we stand? The whole question of human origins, analyzed from the perspective of genetic evidence, mitochondrial DNA evidence in particular, is confusing. For example, some scientists say that a small population of the genus Homo arose from Australopithecus about 2 million years ago in Africa. This population developed into Homo erectus and then spread throughout Eurasia, developing into Neandertals and Neandertal-like populations. About 100,000 years ago a small population of anatomically modern Homo sapiens emerged in Africa and then spread around the world, replacing the earlier populations of Homo erectus and Neandertals without mixing significantly with them (Vigilant et al. 1991; Stoneking et al. 1986). These anatomically modern humans then developed in different regions of the world into the different races we see today. Other scientists, looking at the same genetic, archeological, and paleontological evidence, conclude that the different races of anatomically modern humans emerged simultaneously in different parts of the world, directly from the Homo erectus and Neandertal populations in those parts of the world (Templeton 1993). According to this idea, anatomically modern humans would have emerged in large populations over wide geographical areas, not in some small founder population confined to a small area. A third group asserts that there was a small initial population of anatomically modern humans, confined to a small geographical region, but holds that this population differentiated into the different racial groups we see today while still confined to that region. The racial groups are then supposed to have migrated out of this area and expanded their numbers in particular parts of the world (Rogers and Jorde 1995, p. 1). In short, there is considerable confusion about the genetic evidence and what it means.

Y Chromosome Evidence

In the foregoing discussion about mitochondrial DNA, I briefly mentioned nuclear DNA, the DNA found in the nucleus of human cells, and gave a few examples. Let us now look carefully at another example of such evidence—the Y chromosome.


Human beings have 23 pairs of chromosomes in the nucleus of each cell. One of these pairs of chromosomes determines the sex of the individual. The pair of sex chromosomes in females is made up of two X chromosomes (XX). The pair of sex chromosomes in males is made up of one X chromosome and one Y chromosome (XY).


So, how is the sex of a particular individual determined? The reproductive cells (sperm and eggs) are different from the other cells in the body. Nonreproductive cells have the full complement of 23 pairs of chromosomes, for a total of 46 chromosomes. But a sperm cell or egg cell gets only half that number, just one set of 23 chromosomes instead of 23 pairs. When the sperm and the egg combine, the full number of chromosomes (46, or 23 pairs) is restored. When an egg is produced in a female, it will always carry an X chromosome, because in the female the pair of sex chromosomes is always XX. So when the XX pair splits to form eggs, each egg gets one X chromosome. But in the male, the pair of sex chromosomes is XY. So when the pair splits to form sperm, some of the sperm will carry an X chromosome, and others a Y chromosome. If a sperm carrying an X chromosome combines with an egg, the fertilized egg will have an XX pair of sex chromosomes and will develop into a female child. If a sperm carrying a Y chromosome combines with an egg, the fertilized egg will have an XY pair of sex chromosomes and will develop into a male child. The Y chromosome is thus passed down only from father to son; females do not carry it.
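The inheritance pattern just described can be sketched as a toy simulation. This is purely an illustration written for this discussion, not code from any of the studies cited:

```python
import random

def make_child(mother=("X", "X"), father=("X", "Y")):
    """Each parent contributes one randomly chosen chromosome
    from their pair of sex chromosomes."""
    egg = random.choice(mother)    # always "X", since females are XX
    sperm = random.choice(father)  # "X" or "Y", each half the time
    return tuple(sorted((egg, sperm)))

# A child is female ("X", "X") or male ("X", "Y"); the Y chromosome
# can only arrive via the sperm, so it passes strictly from father to son.
children = [make_child() for _ in range(10_000)]
males = sum(1 for child in children if "Y" in child)
print(f"{males} of {len(children)} children are male")
```

Running the sketch shows roughly half the simulated children turning out male, as the even split of X-bearing and Y-bearing sperm predicts.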


Certain parts of a chromosome are subject to a process called recombination, whereby parts of one chromosome are exchanged with parts of another chromosome. But a large section of the Y chromosome is not subject to such recombination. Theoretically, the only changes that accumulate in this nonrecombining part of the Y chromosome are random mutations. The Y chromosome is thus the male counterpart of the mitochondrial DNA, which is passed down only from the mother and is likewise supposedly not subject to variation other than random mutations. The Y chromosome can therefore be used in human origins research in much the same way as mitochondrial DNA—as a molecular clock and geographical locator. Some researchers propose that just as there was an African Eve, there was also an African Adam, or, as some call him, a “Y-guy.” As we shall see, however, the conclusions that can be drawn from Y chromosome studies are far from firm, and some researchers therefore view “Y-guy” as “a statistical apparition generated by dubious evolutionary assumptions” (Bower 2000a).


In the May 26, 1995 issue of Science, Robert L. Dorit of Yale University and his coauthors published a study of the variation in the ZFY gene on the Y chromosomes of 38 humans from various parts of the world. They compared this variation with that found in chimpanzees. In converting the difference in the degree of variation into years, Dorit relied on the assumption that the human line separated from the chimp line about 5 million years ago. This led him to the conclusion that all the humans in his sample had a common ancestor who existed about 270,000 years ago. This differs from the usual age estimate of 200,000 years that comes from mitochondrial DNA studies (Adler 1995). However, a report in Science News (Adler 1995) pointed out that “Dorit and his coauthors acknowledge that factors other than a recent common ancestor could explain their findings” and that their conclusions relied on a lot of “background assumptions.”
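The calibration logic behind such estimates can be shown with simple arithmetic. In the sketch below, only the 5-million-year split and the 270,000-year result come from the study as reported; the divergence figures are invented for illustration and chosen to reproduce that result:

```python
def tmrca_from_calibration(human_diversity, human_chimp_divergence, split_time_years):
    """Estimate time to the most recent common ancestor by assuming
    differences accumulate at a constant rate (a 'molecular clock')
    calibrated against the assumed human-chimp split."""
    rate = human_chimp_divergence / split_time_years  # differences per year
    return human_diversity / rate

# Invented numbers, for illustration only: if human and chimp sequences
# differ at 100 sites while humans differ among themselves at an average
# of 5.4 sites, a 5-million-year split calibration gives:
age = tmrca_from_calibration(5.4, 100.0, 5_000_000)
print(round(age))  # 270000
```

Note that the answer scales directly with the assumed split time: halving the 5-million-year calibration halves the estimated age, which is why the “background assumptions” mentioned in the Science News report matter so much.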


In the November 23, 1995 issue of Nature, Michael Hammer, of the University of Arizona at Tucson, published a study of Y chromosome variation in eight Africans, two Australians, three Japanese, and two Europeans. He concluded that they all had a common ancestor who lived 188,000 years ago. The geographical location of the common ancestor was not clearly defined. Hammer’s study also suggested that a reanalysis of Dorit’s data would give an age of 160,000 to 180,000 years for the most recent common ancestor of the individuals in the study (Ritter 1995).


In 1998, Hammer and several coauthors published a more comprehensive study of human Y chromosome variation. The time to coalescence for the observed variation was 150,000 years, and the root of the statistical tree was in the African populations. The researchers, using nested cladistic analysis methods, proposed that the Y chromosome evidence showed two migrations: one out of Africa into the rest of the Old World, and another from Asia back into Africa. “Thus, the previously observed high levels of Y chromosomal genetic diversity in Africa may be due in part to bidirectional population movements,” said the researchers (Hammer et al. 1998, p. 427). Hammer and another set of coworkers reached similar conclusions in a 1997 study of the YAP region of the Y chromosome (Hammer et al. 1997). The movement of Asian populations into Africa is interesting in light of accounts from ancient Indian historical writings, which tell of the avatar Parasurama driving renegade members of the ancient Indian royal families out of India to other parts of the world, where, according to some sources, they mixed with the native populations.


In the November 2000 issue of Nature Genetics, Peter Underhill and his coauthors said Y chromosome data suggested that the most recent common male ancestor of living humans lived in East Africa and left there for Asia between 39,000 and 89,000 years ago. By way of contrast, mitochondrial DNA evidence suggested that our common female ancestor left Africa about 143,000 years ago. Underhill simply suggested that the Y chromosome and mitochondrial DNA rates of change are different (Bower 2000a). Henry Harpending of the University of Utah in Salt Lake City thinks the Y chromosome’s mutation rate is slower than Underhill and his coworkers reported. According to Harpending, this would bring Y-guy’s age close to that of Mitochondrial Eve (Bower 2000a). But just as the mitochondrial DNA rate of change is not known with certainty, neither is the Y chromosome rate of change. In an article in Science News, Bower (2000a) says, “The Y chromosome segments in the new analysis exhibit much less variability than DNA regions that have been studied in other chromosomes. Low genetic variability may reflect natural selection, in this case, the spread of advantageous Y chromosome mutations after people initially migrated out of Africa, the researchers suggest. That scenario would interfere with the molecular clock, making it impossible to retrieve a reliable mutation rate from the Y chromosome, they acknowledge.” And geneticist Rosalind M. Harding, of John Radcliffe Hospital in Oxford, England, says, “We don’t know what selection and population structure are doing to the Y chromosome. I wouldn’t make any evolutionary conclusions from [Underhill’s] data” (Bower 2000a). For example, Underhill thought that Africa was the home of the most recent common ancestor of modern humans, because the African populations in his studies showed the most diversity in their Y chromosomes.
But Harding points out that this diversity could have arisen not because Africa was the home of the original human population, but because Africa was more heavily populated than other parts of the world. Also, the diversity in populations outside of Africa could have been reduced by the spreading of particularly favorable genes throughout those populations. Bower says (2000a), “If the critics are right, Y guy could be history, not prehistory.” In other words, humans could be millions and millions of years old, and the genetic diversity we see today could simply reflect some recent genetic events in that long history. The earlier results could simply have been erased with the passage of time.


The most recent Y chromosome studies demonstrate that firm conclusions about human origins based on this kind of evidence are still out of reach. A group of Chinese and American researchers (Ke et al. 2001) sampled 12,127 males from 163 populations in East Asia, checking the Y chromosomes for three markers (called YAP, M89, and M130). According to the researchers, three mutations of these markers (YAP+, M89T, and M130T) arose in Africa, and all three can be traced to another African mutation, the M168T mutation, which arose between 35,000 and 89,000 years ago. The researchers found that all the East Asian males they tested had one of the three African mutations derived from the M168T mutation. They took this to mean that populations migrating from Africa completely replaced the original hominid populations in East Asia. Otherwise, some Y chromosomes without the three African markers should have been found.


As Ke and his coauthors (2001, p. 1152) said, “It has been shown that all the Y chromosome haplotypes found outside Africa are younger than 39,000 to 89,000 years and derived from Africa.” However, they noted that “this estimation is crude and depends on several assumptions.” The assumptions were not directly mentioned in their report. The authors also admitted the possibility of “selection sweep that could erase archaic Y chromosomes of modern humans in East Asia.” Furthermore, they admitted that Y chromosome data is “subject to stochastic processes, e.g., genetic drift, which could also lead to the extinction of archaic lineages.”


Ke and his coauthors (2001, p. 1152) acknowledged another problem, which they said “creates confusion.” They observed that age estimates for a most recent common ancestor arrived at by analysis of variation in mitochondrial DNA and the Y chromosome DNA differ greatly from age estimates derived from analysis of variation in the DNA of the X chromosome and autosomes (chromosomes other than the sex-determining X and Y chromosomes). They said, “The age estimated with the use of autosome/X chromosome genes ranges from 535,000 to 1,860,000 years, much older than the mtDNA and Y chromosome” (Ke et al. 2001, p. 1152). The authors speculate that in the course of population “bottlenecks” during a supposed migration out of Africa, there may have been three or four times as many men as women, leading to the greater diversity in the autosome/X chromosome DNA.


Milford Wolpoff, a committed multiregionalist, says that it’s not surprising that the Y chromosome shows an apparent African origin. Africa had the largest populations for the longest periods of time. Therefore, the African populations were responsible for the greatest number of Y chromosome lineages, which could over time have wiped out other lineages that originally existed along with the African lineages (Gibbons 2001, p. 1052). Ann Gibbons observes that it is difficult to check the reliability of the Y chromosome and mitochondrial DNA evidence. Ideally, one would want to compare this evidence with DNA evidence from many other chromosomes in the nucleus, to see if they all support the same conclusions about the age and geographical origin of anatomically modern humans. But Gibbons (2001, p. 1052) notes: “The dating of nuclear lineages is complicated because most nuclear DNA, unlike that of the mitochondria and the Y chromosome, gets scrambled when homologous chromosomes exchange their genetic material during egg and sperm formation. That makes detection of an archaic lineage so difficult that many geneticists despair they will ever be able to prove—or disprove—that replacement was complete. Says Oxford University population geneticist Rosalind Harding: ‘There’s no clear genetic test. We’re going to have to let the fossil people answer this one.’”

Humans and Neandertals

As we have seen, one group of scientists says that modern human beings evolved from the ape-man Homo erectus in various parts of the world, passing through a Neandertal or Neandertal-like stage. According to this view, called the multiregional hypothesis, today’s Asian people came from Asian Homo erectus, passing through a Neandertal-like stage. Similarly, today’s Europeans should be descendants of the classic Western European Neandertals.


Some scientists have compared the DNA of humans and Neandertals, seeking to clarify their evolutionary relationship. The evidence is inconclusive and subject to varying interpretations. Scientists led by Matthias Krings (1997) extracted some DNA from one of the bones of the original Neandertal specimen, discovered in Germany during the nineteenth century. The DNA was carefully analyzed to make sure it was from the bone itself, and not from modern human contamination. The DNA was mitochondrial DNA, which is passed down directly from mother to child.


Researchers compared the fragment of Neandertal mitochondrial DNA with mitochondrial DNA from 1600 modern humans from Europe, Africa, Asia, the Americas, Australia, and Oceania. The fragment of Neandertal mitochondrial DNA used in the comparison was composed of 327 nucleotide bases. Similar stretches of the modern human mitochondrial DNA samples differed from the Neandertal mitochondrial DNA sample by an average of 27 out of 327 nucleotide bases. The 1600 modern humans differed from each other by an average of 8 nucleotide bases out of 327. Chimpanzees differed from modern humans by 55 out of 327 nucleotide bases. Scientists took all this to mean that Neandertals are not closely related to modern humans. If they had been closely related to humans, the differences in nucleotide bases between humans and Neandertals should have been just slightly more than the average difference among humans—perhaps 10 or 12 nucleotide bases.
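The comparisons above are simply counts of mismatched positions between aligned sequences. A minimal sketch of that calculation, using short invented fragments rather than the actual Neandertal data:

```python
def count_differences(seq_a, seq_b):
    """Count aligned positions at which two DNA sequences disagree."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Two invented 10-base fragments differing at positions 5 and 10:
print(count_differences("ACGTACGTAC", "ACGTTCGTAA"))  # 2
```

The published figures (27, 8, and 55 differences out of 327 bases) are averages of exactly this kind of pairwise count taken over many sequence pairs.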


The scientists who looked at the DNA from the original Neandertal bones found it was no closer to today’s Europeans than to any other group of modern humans. They took this as contrary to the theory that the modern European populations evolved from the European Neandertals. According to this line of reasoning, the Neandertal DNA evidence favors the “out of Africa” hypothesis, which says that modern humans arose only once in Africa, about 100,000 years ago, and then spread to Europe and Asia, replacing the Neandertal-type hominids without breeding with them in any significant numbers. However, the researchers said about their mitochondrial DNA evidence: “These results do not rule out the possibility that Neandertals contributed other genes to modern humans” (Krings et al. 1997, p. 27).


The group of Neandertal DNA researchers headed by Krings proposed an age for the split between the Neandertals and the line of hominids that led to modern humans. They assumed that the human and chimp lines split four or five million years ago, a figure based on rates of mutation in mitochondrial DNA. Using this as a starting point, they estimated that the human/Neandertal split took place between 550,000 and 690,000 years ago. But they acknowledged the possibility of “errors of unknown magnitude” (Krings et al. 1997, p. 25). In other words, the date is speculative. Furthermore, it rests on the assumption that there is an evolutionary connection between humans, chimps, Neandertals, and other hominids, and that the relations reflected in their DNA are also relations of biological descent. But this is simply an assumption.


After the work done by Krings and his coworkers, William Goodwin, a geneticist at the University of Glasgow in Scotland, sequenced some mitochondrial DNA from the bones of an infant Neandertal discovered in the Mezmaiskaya Cave, in the northern Caucasus Mountains (Bower 2000b). The bones are thought to be 29,000 years old. Goodwin compared the mitochondrial DNA from the Caucasus Neandertal infant to the mitochondrial DNA from the original German Neandertal (Krings et al. 1997). He found about the same amount of difference between them as between samples of mitochondrial DNA from modern humans. In other words, the two Neandertals were genetically close to each other. Furthermore, the mitochondrial DNA from the Caucasus Neandertal differed from that of modern humans by about the same amount as the German Neandertal’s, indicating that the Caucasus Neandertal, like the German one, was genetically distinct from modern humans. Goodwin said this supports the out of Africa replacement model of modern human origins. But Milford H. Wolpoff, a supporter of the multiregional hypothesis of modern human origins, suggested that mitochondrial DNA from anatomically modern humans of the same period, 30,000 years ago, would differ from the mitochondrial DNA of today’s humans by the same amount as the Neandertal DNA does. This could be tested with DNA from Homo sapiens living at that time.


In the June 2000 issue of the American Journal of Human Genetics, Lutz Bachmann and his colleagues at the Field Museum, Chicago, announced the results of studies of the nuclear DNA from two Neandertals and from anatomically modern Homo sapiens who existed 35,000 years ago. Using the DNA hybridization technique, which shows the degree of bonding between samples, they determined that the Homo sapiens DNA differed from the Neandertal DNA. This tends to support the work of Krings et al. and Goodwin. But anthropologist Erik Trinkaus disagreed. He pointed out that the DNA hybridization technique gives only a very crude measure of difference. He also said that there is a lot of subjectivity in judging what amount of difference in DNA amounts to a difference in species. Trinkaus believes that humans and Neandertals interbred (implying that their DNA was similar). However, he asserted that the genetic evidence for this interbreeding may have become so diluted as to escape detection by crude DNA hybridization techniques (Bower 2000c).


New mitochondrial DNA studies have added a new element to the debate about the relationship between modern humans and the Neandertals. A team led by Gregory J. Adcock, of the Pierre and Marie Curie University in Paris, examined mitochondrial DNA samples from anatomically modern human skeletons ranging from 2,000 to 62,000 years old. The mitochondrial DNA from the oldest skeleton, from Lake Mungo, Australia, turned out to be more different from that of living humans than the mitochondrial DNA of the Neandertals mentioned above (Bower 2001). Therefore, even if Neandertal DNA is quite different from modern human DNA, this does not necessarily mean that Neandertals did not interbreed with anatomically modern humans.


Even so, the exact nature of the relationship between modern humans and Neandertals remains an open question. Perhaps humans and Neandertals are simply varieties of the same species. Perhaps they are different species who interbred. If we ignore evolutionary speculations, the Neandertal DNA research simply shows that modern humans and Neandertals coexisted. From the available genetic evidence, it is not possible to put any definite limit on how far back in time this coexistence goes. This is consistent with the views presented in Forbidden Archeology, which posits the coexistence of anatomically modern humans and other distinct hominid types for vast periods of time.

Conclusion

Biochemical and genetic evidence is not as reliable as some would have us believe. Many researchers say that the fossil evidence is ultimately more important than the genetic evidence in answering questions about human origins and antiquity. As Frayer and his coauthors (1993, p. 19) said, “Unlike genetic data derived from living humans, fossils can be used to test predictions of theories about the past without relying on a long list of assumptions about the neutrality of genetic markers, mutational rates, or other requirements necessary to retrodict the past from current genetic variation . . . genetic information, at best, provides a theory of how modern human origins might have happened if the assumptions used in interpreting the genetic data are correct.” Contemplating the difficulties of using genetic evidence to establish theories of human origins and antiquity, Oxford University population geneticist Rosalind Harding said, “There’s no clear genetic test. We’re going to have to let the fossil people answer this one” (Gibbons 2001, p. 1052). And when we do look at the fossil evidence in its entirety, we find that anatomically modern humans go so far back in time that it becomes impossible to explain their presence on this planet by current Darwinian theories of evolution. Furthermore, when we look at human origins in terms of the larger question of the origin of life on earth, we find that modern science has not been able to tell us how the first living things, with their genetic systems, came into existence.


Also, both artificial intelligence (AI) and artificial life (Alife) researchers have failed to provide convincing models of living things. Rodney Brooks, of the Artificial Intelligence Laboratory at MIT, wrote in a perceptive article in Nature: “Neither AI or Alife has produced artifacts that could be confused with a living organism for more than an instant. AI just does not seem as present or aware as even a simple animal and Alife cannot match the complexities of the simplest forms of life” (Brooks 2001, p. 409). Brooks attributes the failure to something other than lack of computer power, incorrect parameters, or insufficiently complex models. He raises the possibility that “we are missing something fundamental and currently unimagined in our models.” But what is that missing something? “One possibility,” says Brooks (2001, p. 410), “is that some aspect of living systems is invisible to us right now. The current scientific view of things is that they are machines whose components are biomolecules. It is not completely impossible that we might discover new properties of biomolecules, or some new ingredient. . . . Let us call this the ‘new stuff’ hypothesis—the hypothesis that there might be some extra sort of ‘stuff’ in living systems outside our current scientific understanding.” And what might this new stuff be? Brooks gives David Chalmers as an example of a philosopher who proposes that consciousness might be a currently unrecognized state of matter. But Brooks (2001, p. 411) goes on to say, “Other philosophers, both natural and religious, might hypothesize some more ineffable entity such as a soul or elan vital—the ‘vital force.’” Going along with such philosophers, I would propose that both a soul (conscious self) and vital force are present in humans and other living things. This conscious self and vital force are necessary components in any explanation of living things and their origins.

