Evolutionists attempt to explain away complex specified information (CSI) in living things by appealing to evolutionary algorithms (step-by-step computational procedures).
These algorithms aim to generate an outcome by combining a chance process with law-like sifting. But evolutionary algorithms fall at the first hurdle, because none of them can account for the origin of life, nor for how it first self-replicated. Setting that aside, once life had started and somehow multiplied itself, evolutionary algorithms are then offered as the explanation for CSI. But even in this they fail.
Richard Dawkins’ famous algorithm in ‘The Blind Watchmaker’ (pp. 47–48) demonstrates the problem. He claims to show how a cumulative selection process acting on chance can produce CSI. He starts with the following complex and specified target sequence, a line from Hamlet:

METHINKS IT IS LIKE A WEASEL
The probability of getting this right by complete chance on the first try, e.g. by monkeys typing randomly on a keyboard, would be around 10^-40, and so to stand a better than even chance of getting it would take about 10^40 tries (a 1 followed by 40 zeroes!).
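The arithmetic behind that figure is easy to check: with 26 capital letters plus the space, each of the 28 positions has 27 equally likely characters, so a fully random guess succeeds with probability 27^-28. A quick sketch (Python used purely for illustration):

```python
from math import log10

ALPHABET_SIZE = 27               # 26 capital letters + the space
LENGTH = 28                      # characters in the target sequence

p = ALPHABET_SIZE ** -LENGTH     # probability of a correct first guess
tries = ALPHABET_SIZE ** LENGTH  # order of magnitude of tries for an even chance

print(f"p is about 10^{log10(p):.1f}")        # roughly 10^-40
print(f"tries is about 10^{log10(tries):.1f}")  # roughly 10^40
```

Since 28 × log10(27) ≈ 40.1, the often-quoted 10^-40 is the right order of magnitude.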
Elsewhere Dawkins claims that, because we have millions of years of history, such large numbers don’t matter – evolution will get there in the end. But he conveniently ignores the fact that, in the real world, conditions and environment would change, or even be destroyed, over such long periods. For example, our monkeys would die before getting it right (or give up through boredom, hunger or RSI!), and the keyboard would no doubt have broken long before that. This reduces the chances of getting the sequence right even further, to the point of impossibility.
Aware of the problem, Dawkins replaces pure chance with an evolutionary algorithm, starting from a randomly generated sequence of 28 capital letters and spaces.
This starting sequence just happens to comprise the same number of characters as Dawkins’ target sequence (28), so he is loading the dice from the outset.
Dawkins then allows the letters and spaces to be altered at random, and whenever a change happens to match the corresponding letter in the target sequence, he preserves it and continues the random variation with only the remaining letters. This goes on until he hits the target – in his case in only 43 steps – and on average such an evolutionary algorithm takes only about 40 tries.
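A minimal sketch of the procedure as this article describes it – preserving each matched letter and re-randomizing only the rest – might look like the following. (The names and structure are my own illustration, not Dawkins’ code, and this letter-locking version typically needs somewhere on the order of a hundred generations.)

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"  # Dawkins' 28-character target
ALPHABET = string.ascii_uppercase + " "  # 27 possible characters

def cumulative_search(seed=0):
    rng = random.Random(seed)
    # Start from a randomly generated sequence of the same length.
    current = [rng.choice(ALPHABET) for _ in TARGET]
    steps = 0
    while "".join(current) != TARGET:
        # Any position already matching the target is preserved;
        # only the remaining positions are altered at random.
        for i, ch in enumerate(TARGET):
            if current[i] != ch:
                current[i] = rng.choice(ALPHABET)
        steps += 1
    return steps

print(cumulative_search())  # on the order of 10^2 steps, versus ~10^40 for pure chance
```

Whatever the seed, the step count stays tiny compared with the ~10^40 tries pure chance would need – which is exactly the contrast Dawkins draws, and exactly the contrast examined below.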
So, problem solved, he concludes. But the reality is far from it. First of all, what are the chances, in nature, of each changed letter actually being preserved? To put it in evolutionary terms, what are the chances of a useful mutation being preserved by natural selection? Mutations can only be selected if they give an organism a survival advantage, and the vast majority don’t. Quite the opposite – they are nearly always disadvantageous, even if some might be useful in a different environment.
Also, because of the way Dawkins designed the algorithm, it is always going to end up at the target sequence – with absolute certainty, i.e. ‘maximum probability’. It therefore has minimal complexity, because complexity and probability vary inversely: the greater the complexity, the smaller the probability, and an outcome guaranteed in advance has no improbability left in it. So if evolutionary algorithms cannot generate real complexity, they cannot generate specified complexity either.
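This certainty can be demonstrated directly: because matched letters are never lost, the described procedure cannot terminate anywhere except at the target, whatever random starting sequence it is given. An illustrative sketch (again my own code, not Dawkins’):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def final_sequence(seed):
    rng = random.Random(seed)
    current = [rng.choice(ALPHABET) for _ in TARGET]
    while "".join(current) != TARGET:
        for i, ch in enumerate(TARGET):
            if current[i] != ch:
                current[i] = rng.choice(ALPHABET)
    return "".join(current)

# Twenty independent runs, twenty different random starting sequences,
# one and the same guaranteed outcome:
outcomes = {final_sequence(seed) for seed in range(20)}
print(outcomes)  # {'METHINKS IT IS LIKE A WEASEL'}
```

Every run ends at the same place, i.e. the ‘search’ has probability 1 of reaching its pre-specified outcome.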
Finally, Dawkins designed his algorithm to reach his required outcome, whereas evolution has no target sequence – so all he has done is demonstrate the necessity of intelligent design!