16 April, 2011

What Has Happened to Scientific Method?

Available at Quadrant Online is “Science without Method”, by John Nicol:
Global warming, and its euphemistic sibling “climate change”, remain much in the news.  Specialist research groups around the world continue to produce an unending sequence of papers aimed at demonstrating a litany of problems which might arise should global warming resume.  The authors’ prime expertise often lies not in atmospheric physics or aeronomy, as one might have anticipated; rather, the topic of climate change itself provides abundant research funding, from which they feed more easily than in other areas of research of greater interest and practical use.  Most of these papers are, of course, based upon the output of speculative and largely experimental atmospheric models, exercises in virtual reality, rather than upon observed, real-world measurements and phenomena.  Which leads to the question: what scientific methodology is in operation here?
Though much has been written concerning the scientific method, and the ill-defined question of what constitutes a correct scientific approach to a complex problem, comparatively little comment has been made about the strange mix of empirical and virtual-reality reasoning that characterises contemporary climate change research.  It is obvious that the many different disciplines described as scientific, rather than social, economic, or of the arts, may apply somewhat different criteria in deciding what fundamental processes should define the “scientific method” for each discipline.  Dismayingly, for many years now there has been a growing tendency for formerly “pure” scientific disciplines to take on the characteristics of others, in some cases including the adoption of research attitudes and methods more appropriately applied in the arts and social sciences.  “Post-modernism”, if you like, has proved a contagious disease in academia generally.

Classical scientific method generally follows the simple protocol of first defining an hypothesis concerning the behaviour or cause of some phenomenon in nature, whether physical, biological or chemical.  In most well-defined areas of research, previous theory and experiment may provide such a wide and complex corpus of knowledge that a new hypothesis is not easily or uniquely defined, and may even be left unstated.
This is most commonly the case when a number of diverse disciplines, all important for attaining an understanding of a particular problem, are providing results which lead to contradictory conclusions.  A contemporary example is discussion of the greenhouse effect, one of the most controversial topics ever to be considered within the scientific community.  Conventional thinking on the greenhouse effect is encapsulated in the IPCC’s statement that “We believe that most of the increase in global temperatures during the second half of the twentieth century was very likely due to the increases in the concentration of atmospheric carbon dioxide”.
Clearly this statement would be better worded had it been framed as a hypothesis rather than a belief; treating it as a hypothesis allows it to be rigorously tested (“beliefs”, being untestable, fall outside the spectrum of science).  In the real scientific world, for such an hypothesis to survive rigorous scrutiny, and thereby perhaps grow in strength from hypothesis to theory, it must be examined and re-examined from every possible angle over periods of decades and longer.
In conventional research, the next step—following the formulation of the hypothesis in whatever form it may take—is to select what measurements or analyses need to be done in order to test the hypothesis and thus to advance understanding of the topic.  Most often, theoretical reasoning as to why an hypothesis might be correct or incorrect is followed by the development of experiments in laboratories, or the making of careful observations in nature, which can be organised and classified, and from which measurements can be made and conclusions drawn.  [...]
Out of [a] cut and paste “history” of physics comes the strongest criticism of mainstream climate science research as it is carried on today.  The understanding of the climate may appear simple compared to quantum theory, since the computer models that lie at the heart of the IPCC’s warming alarmism don’t need to go beyond Newtonian Mechanics.  [...]  Yet in contemporary research on matters to do with climate change, and despite enormous expenditure, not one serious attempt has been made to check the veracity of the numerous assumptions involved in greenhouse theory by actual experimentation.
The one modern, definitive experiment, the search for the signature of the greenhouse effect, has failed totally.  [...]
In addition, the data representing the earth’s effective temperature over the past 150 years show that a global human contribution to this temperature cannot be distinguished or isolated at a measurable level above that induced by clearly observed and understood natural effects, such as the partially cyclical redistribution of surface energy in the El Niño.  [...]
So how do our IPCC scientists deal with this?  Do they revise the theory to suit the experimental result, for example by reducing the climate sensitivity assumed in their GCMs?  Do they carry out different experiments (i.e., collect new and different datasets) which might give more or better information?  Do they go back to basics in preparing a new model altogether, or considering statistical models more carefully?  Do they look at possible solar influences instead of carbon dioxide?  Do they allow the likelihood that papers by persons like Svensmark, Spencer, Lindzen, Soon, Shaviv, Scafetta and McLean (to name just a few of the well-credentialed scientists who are currently searching for alternatives to the moribund IPCC global warming hypothesis) might be providing new insights into the causes of contemporary climate change?
Unfortunately, in order, surely, to reach its predetermined conclusion, the meretricious Ross Garnaut’s Garnaut Climate Change Review relies on governmentally approved “settled science”, a fetish for peer-reviewed papers (but, of course, drawn from only one officially approved sub-set), and the fallacious argumentum ad verecundiam:
The Review took as its starting point:
... on the balance of probabilities and not as a matter of belief, the majority opinion of the Australian and international scientific communities that human activities resulted in substantial global warming from the mid-20th century (Garnaut 2008).
Also underpinning the Review was the knowledge from the majority science, that continued growth in greenhouse gas concentrations caused by human-induced emissions would generate high risks of dangerous climate change.  [...]
The Review drew extensively on the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) published in 2007.  The IPCC Assessment Reports are a consolidation of all the peer-reviewed science on climate change, its impacts, and mitigation.[*]  They represent the research and input of thousands of scientists and are the authoritative point of reference on climate change.
These are the same IPCC reports which derived claims that ice is disappearing from the world’s mountain tops from a student’s dissertation and an article in a mountaineering magazine; which based claims that the Amazonian rainforest is remarkably vulnerable on “anonymous propaganda published on the website of a small Brazilian environmental advocacy group”; which first asserted that the Himalayan glaciers could melt away by 2035, but then admitted that the claim was completely unfounded; and so on.