The Problem with the Fine-Tuning Argument: An Excerpt from Victor Stenger’s Last Book God and the Multiverse September 9, 2014

Before he suddenly died two weeks ago, Dr. Victor J. Stenger was cranking out large books every year, tackling topics both large (the universe) and small (atoms).

His final book (as far as we know) is officially released today. It’s called God and the Multiverse: Humanity’s Expanding View of the Cosmos and Stenger, as always, makes difficult concepts accessible to those of us without strong science backgrounds.

The excerpt below, reprinted with permission of Prometheus Books, explains and thoroughly dismantles the “fine-tuning argument.” (I’ve excluded footnotes.)


In recent years, many theologians and Christian apologists have convinced themselves and their followers that they have a knockdown, drag-out scientific argument for the existence of God. They claim that the parameters of physics are so finely tuned that if any one of these parameters were just slightly different in value, life — and especially human life — would have not been possible anywhere in the universe.

Of course, like all design arguments this is a God-of-the-gaps argument that they cannot win in principle because they can never prove conclusively that the values of these parameters cannot be natural. But they keep trying.

Assuming, on no basis whatsoever, that those parameters are independent and could have taken on any value over a wide range, they conclude that the probability of a universe with our particular set of parameters is infinitesimally small. Further assuming, on no basis whatsoever, that the probability of a divine creator is not equally infinitesimally small, they conclude that such a creator existed who fine-tuned the universe for life, particularly human life. Note that there is also no basis whatsoever to assume that this creator was the personal God worshipped by Christians, Muslims, and Jews or the god of any major religion. A deist creator works equally well.

William Lane Craig summarized the argument this way in his 1998 debate with philosopher Massimo Pigliucci (and in other debates available on his website):

During the last 30 years, scientists have discovered that the existence of intelligent life depends upon a complex and delicate balance of initial conditions given in the Big Bang itself. We now know that life-prohibiting universes are vastly more probable than any life-permitting universe like ours. How much more probable?

The answer is that the chances that the universe should be life-permitting are so infinitesimal as to be incomprehensible and incalculable.

John Barrow and Frank Tipler estimate that a change in the strength of gravity or of the weak force by only one part in 10^100 would have prevented a life-permitting universe. There are around 50 such quantities and constants present in the Big Bang which must be fine-tuned in this way if the universe is to permit life. And it’s not just each quantity which must be exquisitely fine-tuned; their ratios to one another must be also finely-tuned. So improbability is multiplied by improbability by improbability until our minds are reeling in incomprehensible numbers.

Just because Craig’s mind reels and he personally can’t comprehend the numbers, it does not follow that they are in fact incomprehensible to the rest of us.

I have written extensively on the subject of fine-tuning but need to cover some of it again here for completeness. I will try to bring the subject up to date without, I hope, being too repetitive. Recently I contributed a chapter arguing against fine-tuning for an Oxford University Press anthology Debating Christian Theism and will refer to some of that material. Christian philosopher Robin Collins of Messiah College presented the case for fine-tuning in an accompanying chapter. In it he criticizes a number of my previous arguments, to which I will respond here. These responses have not appeared elsewhere.

The multiverse provides a very simple, purely natural, solution to the fine-tuning problem. Suppose our universe is just one of an unlimited number of individual universes that extend for an unlimited distance in all directions and for an unlimited time in the past and future. If that’s the case, we just happen to live in that universe that is suited for our kind of life. Our particular universe is not fine-tuned to us; we are fine-tuned to it.

The multiverse explanation is adequate to refute fine-tuning. Note the multiverse does not need to be proved to exist to refute fine-tuning claims. It just must be plausible. Those who dispute this have the burden of proving otherwise. This they have not done.

Nevertheless, the multiverse remains unverified so it behooves us to continue to examine the credibility of the divine fine-tuning hypothesis for our single, lone universe.

In a book published in 2011 titled The Fallacy of Fine Tuning, I provided purely natural explanations for the values of the so-called fine-tuned parameters that appear most frequently in the theistic literature.

Many authors have written on fine-tuning, often misleadingly referred to as the anthropic principle, which suggests it has something to do with human beings. They insist certain parameters are fine-tuned to exquisite precision. And, by “exquisite precision,” they don’t just mean within an order of magnitude or 10 percent, or even 1 percent. Rather, they assert that some parameters must be tuned to one part in fifty to a hundred orders of magnitude for any life to be possible.

Before I get to the specific parameters that are supposedly so fine-tuned, let me say a word about my basic interpretation of their meaning. The models of physics are human constructions, and so it follows that the quantities, parameters, and “laws” that appear in these models are likewise constructed. It strikes me as somewhat incongruous to think of them as fine-tuned by God or nature. Physicists in another galaxy might have their own models with a totally different set of parameters.

Thus the parameters that are supposedly fine-tuned need not have any specific ontological significance. Of course, the models must agree with observations and so, as I have emphasized, they must have something to do with whatever objective reality is out there. They are not arbitrary, just as a landscape painting is not a random splash of colors (unless it’s by Jackson Pollock).

Let us now look at the specifics. Physicist and Christian apologist Hugh Ross lists twenty-nine “characteristics of the universe” and forty-five characteristics of the solar system “that must be fine-tuned for any kind of life to be possible.” Right off the bat this statement is incorrect. More than half of Ross’s parameters do not address life in general but only life on this planet and, in some cases, human life in particular.

The most common fallacy made by Ross and others who agree with his position is to single out the carbon-based life we have on Earth and assume that it is the only possible type of life. According to Christian belief, humans are made in the image of God (Genesis 1:26), so it is not surprising that they find it difficult to imagine other life-forms. However, with only one example available, they simply do not have the data to allow them to conclude that all other forms of life are impossible, whether based on carbon chemistry or not.

Referring to the possibility that the parameters can vary randomly, Collins asks, “Why should they give rise to precisely the right set of laws required for life?” Well, that’s the whole point. They didn’t have to be precise to lead to some form of life somewhere in this vast universe. In The Fallacy of Fine Tuning I showed that wide ranges of physical parameters could plausibly lead to conditions, such as long ages of stars, that could in principle allow for the evolution of life of one form or another.


Two of the parameters that appear in most lists of fine-tuned quantities are

• the speed of light in a vacuum, c, and
• Planck’s constant, h.

As basic as these parameters are to physics, their values are arbitrary. The fundamental unit of time in physics is the second. As we saw in chapter 6, the units for all other measurable quantities in physics, except for those that are dimensionless, are defined relative to the second. The value of c is chosen to define what units will be used to measure distance. To measure distance in meters you choose c = 3 × 10^8. To measure distance in light-years you choose c = 1.

The value of Planck’s constant h is chosen to define what units will be used to measure energy. To measure energy in joules you choose h = 6.626 × 10^-34. To measure energy in electron-volts you choose h = 4.136 × 10^-15. Physicists like to work in what they call “natural units,” where ħ = h/2π = c = 1. Other arbitrary quantities that are often claimed to be fine-tuned include Boltzmann’s constant, kB, which simply converts from units of absolute temperature, kelvins, to energy, and Newton’s gravitational constant, G, which also depends on the choice of units. In Planck units, G = 1.
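
The point can be checked with a short back-of-envelope script (not from the book; the constants are approximate CODATA values). The numeric value of c depends entirely on the units chosen, while a dimensionless combination like the fine-structure constant comes out the same no matter what units we use:

```python
import math

# Approximate SI values (CODATA)
c_si = 2.998e8            # speed of light, m/s
h_si = 6.626e-34          # Planck's constant, J*s
hbar_si = h_si / (2 * math.pi)
e = 1.602e-19             # elementary charge, C
eps0 = 8.854e-12          # vacuum permittivity, F/m

# Change units: measure time in years and distance in light-years.
# A light-year is *defined* as c times one year, so c = 1 by construction.
seconds_per_year = 3.156e7
meters_per_lightyear = c_si * seconds_per_year
c_ly = (c_si * seconds_per_year) / meters_per_lightyear
print(c_ly)  # 1.0

# The fine-structure constant is dimensionless, so its value does not
# depend on the unit system; it is the kind of quantity whose value
# could in principle be "tuned".
alpha = e**2 / (4 * math.pi * eps0 * hbar_si * c_si)
print(1 / alpha)  # ~137
```

The contrast is the whole argument in miniature: c and h are bookkeeping conventions, while only dimensionless ratios like α are candidates for genuine fine-tuning.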


Less trivially, let us look at five parameters that are claimed by theists to be so finely tuned that no form of life could exist in any universe in which any of the values differed by an infinitesimal amount from their existing values in our universe. These are:

• The ratio of electrons to protons in the universe
• The ratio of the electromagnetic force to gravity
• The expansion rate and mass density of the universe
• The cosmological constant

The ratio of electrons to protons in the universe

Ross asserts that if this ratio were larger, there would be insufficient chemical binding. If smaller, electromagnetism would dominate gravity, preventing galaxy, star, and planet formation.

The fact that the ratio is exactly equal to one can be easily explained. The number of electrons in the universe should equal the number of protons from charge conservation, on the reasonable expectation that the total electric charge of the universe is zero. While there are other charged particles in the standard model, the proton and electron are the only ones that are stable.

The ratio of electromagnetic force to gravity

Ross says that if this ratio were larger, there would be no stars less than 1.4 solar masses and hence short and uneven stellar burning. If it were smaller, there would be no stars more than 0.8 solar masses and hence no heavy element production.

The ratio of the forces between two particles depends on their charges and masses. As I have already remarked, despite the statement often heard in most (if not all) physics classrooms, that gravity is much weaker than electromagnetism, there is no way one can state absolutely the relative strengths of gravity and any other force. Indeed, if one were to define the strength of gravity using the only natural mass, the Planck mass, one finds that gravity is 137 times stronger than electromagnetism.
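
A quick illustrative calculation (mine, not the book's; constants are approximate) makes the point concrete. The dimensionless gravitational coupling G·m²/(ħc) equals 1 by construction for the Planck mass, which is 1/α ≈ 137 times the electromagnetic coupling; substitute the proton mass instead and the same ratio collapses by dozens of orders of magnitude:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J*s
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m

# Planck mass: the only mass scale built from G, hbar, and c alone
m_planck = math.sqrt(hbar * c / G)            # ~2.18e-8 kg

alpha_grav = G * m_planck**2 / (hbar * c)     # = 1 by construction
alpha_em = e**2 / (4 * math.pi * eps0 * hbar * c)  # ~1/137

print(alpha_grav / alpha_em)  # ~137: gravity "stronger" than electromagnetism

# Using the proton mass instead, gravity looks absurdly weak:
m_proton = 1.673e-27
alpha_grav_p = G * m_proton**2 / (hbar * c)
print(alpha_grav_p / alpha_em)  # ~1e-36
```

The "strength of gravity" thus depends entirely on which mass you plug in, which is why the comparison cannot be stated absolutely.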

The reason gravity is so weak in atoms is the small masses of elementary particles. This can be understood to be a consequence of the standard model of elementary particles in which the bare particles all have zero masses and pick up small corrections by their interactions with other particles.

Collins misunderstands this point when he writes: “Stenger’s attempt to explain away this apparent fine-tuning [the low mass of the proton and neutron] is like someone saying protons and neutron are made up of quarks and gluons, and since the latter masses are small, this explains the smallness of the former masses.”

This is a complete misrepresentation of my position. Nowhere have I used this argument. Collins provides no direct quotation or citation. In truth, I make the very reasonable assumption, based on the standard model, that all the elementary particles (the proton and neutron are not elementary) were massless when they were first generated in the early universe. All have low masses today, compared to the Planck mass, since those masses were just small corrections provided by the Higgs mechanism. And, before Collins complains that the Higgs mechanism is another arbitrary assumption, recall that it is part of the standard model, which emerged undesigned from the symmetries of emptiness and the randomness of symmetry breaking.

The expansion rate and mass density of the universe

Ross claims that if the expansion rate of the universe, given by the Hubble parameter H, were larger, there would be no galaxy formation; if smaller, the universe would collapse prior to star formation. He also asserts that if the average mass density of the universe were larger, there would be too much deuterium from the big bang and stars would burn too rapidly. If it were smaller, there would be insufficient helium from the big bang and too few heavy elements would form.

In chapter 12, we saw that inflation results in the mass density of the universe being very close to the critical value ρc. This, in turn, implies that H also has a critical value. Only one of the two parameters is adjustable. Let’s assume it’s H.

Now, in the approximation of a linear expansion given by Hubble’s law (see chapter 8), the age of the universe is given by T = 1/H. This is currently 13.8 billion years and is hardly fine-tuned for life. Life could just as well have evolved for T = 12.8 billion years or T = 14.8 billion years. In fact, suppose T = 1.38 billion years. Then we could not have life now, but it would come along ten billion years or so later. Or, suppose T = 138 billion years. Then life would already have appeared 124 or so billion years earlier.
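
The T = 1/H estimate is easy to verify numerically. A rough sketch (my own, using a representative present-day Hubble parameter of about 70 km/s/Mpc, not a value quoted in the excerpt):

```python
# Back-of-envelope age of the universe from linear Hubble expansion, T = 1/H
H0 = 70.0                      # Hubble parameter, km/s per megaparsec (approximate)
km_per_mpc = 3.086e19          # kilometers in one megaparsec
H0_per_sec = H0 / km_per_mpc   # convert H to units of 1/s

T_seconds = 1.0 / H0_per_sec
seconds_per_gyr = 3.156e7 * 1e9
T_gyr = T_seconds / seconds_per_gyr
print(T_gyr)  # ~14 billion years
```

The linear approximation lands within about a billion years of the accepted 13.8-billion-year age, which is all the precision the argument needs: nothing about that number is delicately balanced.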

The cosmological constant

The cosmological constant is equivalent to an energy density of the vacuum and is the favorite candidate for the dark energy, which is responsible for the acceleration of the universe’s expansion — constituting over 68 percent of the total mass/energy of the universe.

We saw in chapter 13 that calculations of the energy density of the vacuum that assume it equals the zero-point energy give answers that are 50–120 orders of magnitude larger than the maximum value allowed by observations.

Physicists have not reached a consensus on the solution to the cosmological-constant problem. Some prominent figures, such as Steven Weinberg and Leonard Susskind, think the answer lies in multiple universes. Both refer to the fact that string theory, or its more advanced version called M-theory, offers a “landscape” of perhaps 10^500 different possible universes. But we have no need for such speculation.

As I pointed out in chapter 13, the original energy-density calculations incorporated a fundamental error by summing all the states in a given volume. Since the entropy of a system is given by the number of accessible states of the system, the entropy calculated by summing over the volume will be greater than the entropy of a black hole of the same size, which depends on its area rather than its volume. But since we cannot see inside a black hole, the information that we have about what is inside is as small as it can be and so the entropy is as large as it can be.

Therefore, it was a mistake to calculate the number of states by summing over the volume. Correcting this by summing over the area, or, equivalently, by setting the number of states equal to the entropy of a black hole the same size as the volume, we can naturally constrain the vacuum energy density. This calculation yields the result that an empty universe will have a vacuum energy density about equal to the critical density, just the value it appears to have.

For technical reasons, cosmologists are not ready to accept this solution to the cosmological-constant problem. Nevertheless, I think it is fair to conclude that the original calculation is simply wrong — as far wrong as any other calculation in the history of physics — and should be ignored. In any case, don’t give up all your worldly goods and enter a monastery or convent because the cosmological constant is so small.

God and the Multiverse is available online and in bookstores beginning today.
