Friday, November 24, 2006

Biologists, By And Large, Are Simply Unqualified To Judge The Plausibility Of Unguided Evolution

Granville Sewell:

When Dr. Behe was at the University of Texas El Paso in May of 1997 to give an invited talk, I told him that I thought he would find more support for his ideas in mathematics, physics and computer science departments than in his own field. I know a good many mathematicians, physicists and computer scientists who, like me, are appalled that Darwin's explanation for the development of life is so widely accepted in the life sciences. Few of them ever speak out or write on this issue, however--perhaps because they feel the question is simply out of their domain. But I believe there are two central arguments against Darwinism, and both seem to be most readily appreciated by those in the more mathematical sciences.

...

1. The cornerstone of Darwinism is the idea that major (complex) improvements can be built up through many minor improvements; that the new organs and new systems of organs which gave rise to new orders, classes and phyla developed gradually, through many very minor improvements.

...

Behe's book is primarily a challenge to this cornerstone of Darwinism at the microscopic level. Although we may not be familiar with the complex biochemical systems discussed in this book, I believe mathematicians are well qualified to appreciate the general ideas involved. And although an analogy is only an analogy, perhaps the best way to understand Behe's argument is by comparing the development of the genetic code of life with the development of a computer program. Suppose an engineer attempts to design a structural analysis computer program, writing it in a machine language that is totally unknown to him. He simply types out random characters at his keyboard, and periodically runs tests on the program to recognize and select out chance improvements when they occur. The improvements are permanently incorporated into the program while the other changes are discarded. If our engineer continues this process of random changes and testing for a long enough time, could he eventually develop a sophisticated structural analysis program? (Of course, when intelligent humans decide what constitutes an "improvement", this is really artificial selection, so the analogy is far too generous.)
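To make the process Sewell describes concrete, here is a minimal sketch of the random-change-and-test loop in Python. It is only an illustration of the analogy, not anything from Sewell's essay: the fitness function, the 200-character program size, and the 100,000-keystroke budget are hypothetical stand-ins, since the essay does not specify how the engineer's "tests" would score a program.

    import random
    import string

    ALPHABET = string.printable           # stand-in for the unknown machine language
    LENGTH = 200                          # assumed size of the program being edited

    def fitness(program):
        # Hypothetical stand-in for "running tests on the program": it simply
        # counts characters matching an arbitrary target string, which is far
        # more generous than any real test suite would be.
        target = "a" * LENGTH
        return sum(1 for got, want in zip(program, target) if got == want)

    def random_keystroke(program):
        # Replace one randomly chosen character, mimicking a single keystroke.
        i = random.randrange(len(program))
        return program[:i] + random.choice(ALPHABET) + program[i + 1:]

    # The loop described above: make a random change, keep it only if the
    # tests improve, and discard everything else.
    program = "".join(random.choice(ALPHABET) for _ in range(LENGTH))
    best = fitness(program)
    for _ in range(100_000):
        candidate = random_keystroke(program)
        score = fitness(candidate)
        if score > best:
            program, best = candidate, score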

If a billion engineers were to type at the rate of one random character per second, there is virtually no chance that any one of them would, given the 4.5 billion year age of the Earth to work on it, accidentally duplicate a given 20-character improvement. Thus our engineer cannot count on making any major improvements through chance alone. But could he not perhaps make progress through the accumulation of very small improvements? The Darwinist would presumably say, yes, but to anyone who has had minimal programming experience this idea is equally implausible. Major improvements to a computer program often require the addition or modification of hundreds of interdependent lines, no one of which makes any sense, or results in any improvement, when added by itself. Even the smallest improvements usually require adding several new lines. It is conceivable that a programmer unable to look ahead more than 5 or 6 characters at a time might be able to make some very slight improvements to a computer program, but it is inconceivable that he could design anything sophisticated without the ability to plan far ahead and to guide his changes toward that plan.
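The arithmetic behind the "virtually no chance" claim can be checked on the back of an envelope. The sketch below assumes a 100-character keyboard and treats every keystroke as the start of an independent attempt at the 20-character string; both assumptions are mine, not Sewell's.

    # Rough check of the "billion engineers" estimate.
    keyboard_size = 100                       # assumed number of distinct characters
    target_length = 20                        # the "given 20-character improvement"
    engineers = 1_000_000_000                 # one billion engineers
    seconds = 4.5e9 * 365.25 * 24 * 3600      # 4.5 billion years, in seconds

    p_per_attempt = keyboard_size ** -target_length   # 1e-40 with these numbers
    attempts = engineers * seconds                    # roughly 1.4e26 keystrokes
    expected_hits = attempts * p_per_attempt          # roughly 1.4e-14

    print(f"attempts      ~ {attempts:.1e}")
    print(f"expected hits ~ {expected_hits:.1e}")     # effectively zero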

If archeologists of some future society were to unearth the many versions of my PDE solver, PDE2D, which I have produced over the last 20 years, they would certainly note a steady increase in complexity over time, and they would see many obvious similarities between each new version and the previous one. In the beginning it was only able to solve a single linear, steady-state, 2D equation in a polygonal region. Since then, PDE2D has developed many new abilities: it now solves nonlinear problems, time-dependent and eigenvalue problems, systems of simultaneous equations, and it now handles general curved 2D regions. Over the years, many new types of graphical output capabilities have evolved, and in 1991 it developed an interactive preprocessor, and more recently PDE2D has adapted to 3D and 1D problems. An archeologist attempting to explain the evolution of this computer program in terms of many tiny improvements might be puzzled to find that each of these major advances (new classes or phyla??) appeared suddenly in new versions; for example, the ability to solve 3D problems first appeared in version 4.0. Less major improvements (new families or orders??) appeared suddenly in new subversions; for example, the ability to solve 3D problems with periodic boundary conditions first appeared in version 5.6. In fact, the record of PDE2D's development would be similar to the fossil record, with large gaps where major new features appeared, and smaller gaps where minor ones appeared. That is because the multitude of intermediate programs between versions or subversions which the archeologist might expect to find never existed, because--for example--none of the changes I made for version 4.0 made any sense, or provided PDE2D any advantage whatever in solving 3D problems (or anything else) until hundreds of lines had been added.

Whether at the microscopic or macroscopic level, major, complex, evolutionary advances, involving new features (as opposed to minor, quantitative changes such as an increase in the length of the giraffe's neck*, or the darkening of the wings of a moth, which clearly could occur gradually) also involve the addition of many interrelated and interdependent pieces. These complex advances, like those made to computer programs, are not always "irreducibly complex"--sometimes there are intermediate useful stages. But just as major improvements to a computer program cannot be made 5 or 6 characters at a time, certainly no major evolutionary advance is reducible to a chain of tiny improvements, each small enough to be bridged by a single random mutation.

2. The other point is very simple, but also seems to be appreciated only by more mathematically-oriented people. It is that to attribute the development of life on Earth to natural selection is to assign to it--and to it alone, of all known natural "forces"--the ability to violate the second law of thermodynamics and to cause order to arise from disorder. It is often argued that since the Earth is not a closed system--it receives energy from the Sun, for example--the second law is not applicable in this case. It is true that order can increase locally, if the local increase is compensated by a decrease elsewhere; i.e., an open system can be taken to a less probable state by importing order from outside. For example, we could transport a truckload of encyclopedias and computers to the moon, thereby increasing the order on the moon, without violating the second law. But the second law of thermodynamics--at least the underlying principle behind this law--simply says that natural forces do not cause extremely improbable things to happen**, and it is absurd to argue that because the Earth receives energy from the Sun, this principle was not violated here when the original rearrangement of atoms into encyclopedias and computers occurred.
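The "compensation" bookkeeping mentioned above is usually written as the standard entropy balance for an open system; the form below is the textbook statement, included only to pin down what "the local increase is compensated by a decrease elsewhere" means, and is not specific to Sewell's essay.

\[
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\ge\; 0,
\qquad
\Delta S_{\text{system}} \;\ge\; \int \frac{\delta Q}{T}
\]

On this accounting, the entropy of a subsystem may decrease so long as the total stays non-negative; whether that bookkeeping covers the kind of "order" at issue is exactly the point Sewell goes on to contest.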

The biologist studies the details of natural history, and when he looks at the similarities between two species of butterflies, he is understandably reluctant to attribute the small differences to the supernatural. But the mathematician or physicist is likely to take the broader view. I imagine visiting the Earth when it was young and returning now to find highways with automobiles on them, airports with jet airplanes, and tall buildings full of complicated equipment, such as televisions, telephones and computers. Then I imagine the construction of a gigantic computer model which starts with the initial conditions on Earth 4 billion years ago and tries to simulate the effects that the four known forces of physics (the gravitational, electromagnetic and strong and weak nuclear forces) would have on every atom and every subatomic particle on our planet (perhaps using random number generators to model quantum uncertainties!). If we ran such a simulation out to the present day, would it predict that the basic forces of Nature would reorganize the basic particles of Nature into libraries full of encyclopedias, science texts and novels, nuclear power plants, aircraft carriers with supersonic jets parked on deck, and computers connected to laser printers, CRTs and keyboards? If we graphically displayed the positions of the atoms at the end of the simulation, would we find that cars and trucks had formed, or that supercomputers had arisen? Certainly we would not, and I do not believe that adding sunlight to the model would help much. Clearly something extremely improbable has happened here on our planet, with the origin and development of life, and especially with the development of human consciousness and creativity.

It's sort of interesting to me that in any other science, an assessment of minuscule probability means that some other explanation must be found, but in Darwinism, the response to such a probability is "Unlikely things happen all the time! This result proves that Natural Selection has awesome power to overcome vanishingly small probabilities!"

Pure sophistry.

Here's an amusing exchange from the comments on the Uncommon Descent post that led me to the Sewell essay:

Chris Hyland // Nov 24th 2006 at 2:53 pm

I can’t speak for Sewell in particular, but in my experience scientists are angered by the claims that the ID movement has overturned decades of scientific research without actually doing any of their own.

russ // Nov 24th 2006 at 4:30 pm

How many ounces of gold were produced by the critics of alchemy? Were alchemists angered by the man on the street who scoffed at their enterprise?

Hyland once again illustrates the common attitude of Darwinists. You see, they own all of the data of empirical science and they alone have total freedom to (mis)interpret it. No one else has the right to interpret published data. Other people have to go and get their own data. By this standard Max Planck was a thief because he found the quantum mechanical explanation of blackbody radiation without doing any of his own measurements, unfairly overturning decades of hard-won research based on assumptions of classical electromagnetism. I guess he did so out of sheer laziness, piggybacking on the hard work of others without lifting a finger himself. Planck should be ashamed.

1 comment:

Anonymous said...

Rather, it is mathematicians and engineers who are, by and large, unqualified to comment on the plausibility of evolutionary change. Consider as evidence the naive rejection of the argument that local order CAN be created with the input of energy, as from the sun. Thermodynamics does not require intelligence in order to create such local states of order. Arguments involving encyclopedias being transported to the moon are not pertinent to the discussion.

(Oh, by the way, a goodly number of the biologists I have met could beat the pants off of most engineers when it comes to using theoretical mathematics.)

--A biologist, evolutionist, and a Christian