The question we discuss is: How useful is contemporary science as a source of knowledge? There are good reasons to be cautious about its various theories and proclamations. So many reasons, in fact, that this is a two-part article.

Let’s begin with the problem of suggestibility.


As normal human beings we are suggestible. Throughout our lives we have received many suggestions that originated with contemporary science. Mostly, we believed these suggestions without question.

Most of the things we think we know we simply accepted “in good faith.” For example, ask anyone who is not in the Work about “how life came to be” and you will normally be treated to a mishmash of ideas that revolve around Darwinian evolution. Most likely the person describing these ideas will not have studied the field at all and will simply have accepted “in good faith” what they were taught at school, or encountered in the media, or have read about in books and magazines. They are unlikely to provide a critical view.

To consider the opinions of another to have any validity, one needs to know their source. If the person is not themselves the source, one needs to determine who the original source was and consider how they arrived at their opinion. Agreeing with the opinion requires either a review of how the original opinion was arrived at, or one’s own analysis that reaches the same opinion by a different route.

Contemporary science has no unity 

We might believe that the body of scientific thought constitutes a unity, allowing, perhaps, for the fact that this body of thought is gradually evolving. However, it has no unity; it is a consensus reflecting the opinion of those highest in the scientific hierarchy. You might think otherwise if you peruse Wikipedia, which is a large, conscientiously maintained source of scientific information. However, if you choose any particular theory at random and google the topic, you will usually discover opposing theories and dissenting opinions. Wikipedia has evolved into the “official” mouthpiece of science.

There is no individual in any scientific field who is the acknowledged “master.” Even in fields where one individual’s work and opinions are dominant for a while, it is unlikely that he or she is conversant with all hypotheses and experimental results in their own field. And their expertise beyond their own field is likely to be thin or non-existent. 

“Scientific truth” is and will forever be the aggregation of many ‘I’s. 

Are scientific experiments truly repeatable? 

The repeatability of experiments is a supremely important criterion for accepting any scientific hypothesis. Some experiments certainly are repeatable. If you mix a given amount of silver nitrate with a given amount of sodium chloride at a specific temperature, you will produce a precipitate of a given amount of silver chloride. You always get the same result. So it is with some scientific experiments. However, there are also many notable failures to repeat “discovered” phenomena. 
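The silver chloride example can be made concrete with a little stoichiometry. The sketch below uses standard approximate molar masses; the specific input quantities are invented for illustration.

```python
# Approximate molar masses in g/mol (standard reference values).
AGNO3 = 169.87  # silver nitrate
NACL = 58.44    # sodium chloride
AGCL = 143.32   # silver chloride

def agcl_precipitate_grams(agno3_g, nacl_g):
    """Mass of AgCl precipitate from AgNO3 + NaCl -> AgCl + NaNO3.

    The reagent present in fewer moles limits the reaction, so the
    yield is fully determined by the input quantities.
    """
    limiting_moles = min(agno3_g / AGNO3, nacl_g / NACL)
    return limiting_moles * AGCL

# 17.0 g of silver nitrate with excess salt always yields the same mass.
print(f"{agcl_precipitate_grams(17.0, 10.0):.2f} g AgCl")
```

The same inputs always give the same output, which is exactly the repeatability the paragraph describes: the outcome is determined by known quantities and nothing else.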

One famous area of disputed claims is the ESP research of J B Rhine, which suggested experimental “proof of telepathy.” It was never independently verified. Once you enter the area of scientific psychology, you encounter the problem of the experimenter unwittingly influencing the experiment, and the additional problem that one group of subjects is not necessarily equivalent to another. Rhine’s experiments may have suffered from both of these failings. Perhaps J B Rhine and his methods were at fault, and perhaps not. 

As Heraclitus noted, “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” 

The point is that repeatability is not easy to establish, because all the factors that influence the outcome of an experiment may not be known. 

Where you do not have repeatability, the scientific method rules itself out—in theory. In practice, that important criterion is not always enforced. Some experiments, notably those carried out in the Large Hadron Collider (LHC) buried beneath the France-Switzerland border, escape the “rule of repeatability” because there is only one LHC and the demand for its use far outstrips availability. 

Even if you successfully “exactly repeat” an experiment with this equipment, until someone builds another equivalent LHC, you cannot know for sure that there wasn’t some subtle fault in the experimental equipment.

The problem of “the closed system”

It is usually the case that a scientific hypothesis is expressed in terms of cause and effect, in the sense that a particular action in a particular situation causes a particular result. The problem in proving such a hypothesis is that the scientist needs to design an experiment that eliminates all extraneous influences. A closed system needs to be created which includes only the relevant components. However since the scientist cannot know everything that must be eliminated—since he is dealing to some extent with the unknown—it is difficult to be certain that an experimental design creates a genuinely closed system.

More to the point, the truth is that there are no fully closed systems. The only truly closed system in the universe is the universe itself, and even that is a conceptual assumption. It is also worth noting that almost all the experiments that have been carried out since the dawn of science have been carried out on the planet Earth. 

All, including those carried out in orbit around Earth or in its vicinity, are proven only in this locality. If there is some influencing factor in this locality that does not generally apply throughout the universe, then the generality of all of science is in question. 

The statistical problem

Where scientists cannot create a closed system, they will attempt to verify a hypothesis statistically. If an experiment does not always produce the same result, but when repeated produces it a statistically significant number of times, the hypothesis is deemed to be supported, if not proven. Common examples of this are found in the field of medicine. 

When searching for the cause of a particular disease, epidemiologists will conduct experiments to try to identify the responsible pathogen. If you review such experiments, you will find that there is normally a control group of people in the locality under investigation, who show no symptoms of the disease. Their health is compared to a group suffering from the disease. If the pathogen is isolated, it will normally be found, by test, to exist in the bodies of most of the infected group—but not all of them. In the control group it will be found to be absent in the bodies of most, but not all. This is a strong sign that the identified pathogen is the cause.

You might protest the fact that the pathogen cannot be isolated in every member of the infected group, and that it can be found in one or two of the “uninfected” group. But the human body is a very complex system, and there can be great variability from one such system to another. The few in the uninfected group who show signs of the pathogen may have very robust immune systems and antibodies that can cope with it. On the other side of the line, those who showed no evidence of the pathogen but had symptoms of the disease may have been affected by undetectable levels of it.
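The group comparison described above is often summarised as an odds ratio. The counts below are invented for illustration (not drawn from any real study); they show how a strong but imperfect association looks in numbers.

```python
# Hypothetical test results: 100 people with the disease, 100 symptom-free
# controls. Most, but not all, of the sick test positive for the pathogen;
# a few of the healthy controls also test positive.
infected_positive, infected_negative = 92, 8
control_positive, control_negative = 6, 94

# Odds ratio: how much more likely a positive test is in the infected
# group than in the control group.
odds_infected = infected_positive / infected_negative
odds_control = control_positive / control_negative
odds_ratio = odds_infected / odds_control

print(f"positive in infected group: {infected_positive}/100")
print(f"positive in control group:  {control_positive}/100")
print(f"odds ratio: {odds_ratio:.1f}")
```

An odds ratio this far above 1 is a strong statistical association, yet, as the next paragraphs argue, it remains an association: the arithmetic alone says nothing about causation.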

In any event, with epidemiology, that is merely the beginning of the story. The next steps are to proceed from these results to identify how infection by the pathogen occurs (by contagion, by insect bite, etc.) and to find ways to prevent transmission. Where such campaigns are successful it is clear that the pathogen has been nailed.

The point is that the statistics demonstrate only a correlated association. Such associations do not prove causation; they only indicate the possibility of causation. Nevertheless, such statistical data is often taken to demonstrate causation, even among scientists. The fault is not in statistics itself, but in its abuse. 

The book Spurious Correlations* presents many excellent and amusing examples of correlations that clearly have no direct relation to causation. They include:

Figures from 1999 to 2009 demonstrate a 99.79% correlation between US spending on science, space and technology and US suicides by hanging, strangulation and suffocation.

Figures from 1996 to 2008 demonstrate a 95.23% correlation between Math doctorates awarded in the US and the amount of uranium stored at US nuclear power plants.

Figures from 1999 to 2009 demonstrate a 95.45% correlation between US crude oil imports from Norway and US drivers killed in a collision with a railway train. 

At above 95%, all of these are very high correlations, demonstrating how slippery correlation can be in any scientific context. And yet, contemporary science cannot proceed without using statistical correlation. If a scientist can present high correlation along with a convincing explanation of why A causes B, the hypothesis is likely to be given credence. Contemporary science is obliged to walk this line.
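How easily two unrelated series can correlate above 95% is worth seeing numerically. The yearly figures below are invented for illustration (they are not the book’s data); all that matters is that both series happen to trend upward over the same period.

```python
import math

# Two hypothetical series that merely both rise over the same five years,
# e.g. research spending (billions) and an unrelated annual count.
spending = [18.1, 18.6, 19.3, 20.1, 20.7]
deaths = [5400, 5600, 5900, 6200, 6400]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Any two roughly linear upward trends correlate almost perfectly.
print(f"correlation: {pearson(spending, deaths):.4f}")
```

The two series share nothing but a common trend, yet the coefficient comes out above 0.99. This is why a high correlation, taken alone, proves nothing about a causal link.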

That’s enough “reasons to doubt” for this article, but there is more to add later.