Wednesday, January 30, 2008

Computer Models Don't Always Work

Jack W. Dini
Livermore, CA

From: Plating & Surface Finishing, May 2005

What grade would you give someone who was correct 20 percent of the time? Not passing, for sure. However, being right 20 percent of the time got some authors published in the prestigious journal Science. (1) They were trying to account for the decline in global temperatures from the end of World War II until the late 1970s. As an aside, in case you don’t remember, the ’70s were a time when we were supposedly headed for an ‘ice age.’ Newsweek highlighted this with an article titled, “The Cooling World.” (2) Anyhow, getting back to the present, it turns out that computer models have a difficult time producing cooling with the multitude of variables in the mix. The authors of the Science article, Delworth and Knutson, found that all they had to do was run their model many times, compare the output with the observed temperature history, tweak some of the input, and go back for another run. After five such runs they concluded, “in one of the five GHG [greenhouse gases]-plus-sulfate integrations, the time series of global mean surface air temperature provides a remarkable match to the observed record, including the global warmings of both the early (1925-1944) and latter (1978 to the present) parts of the century. Further, the simulated spatial pattern of warming in the early 20th century is broadly similar to the observed pattern of warming.” (1)
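(For readers who like to see the mechanics, the selection procedure described above is easy to caricature in a few lines of code. The sketch below is purely illustrative: the "model," the seeds, and the observed values are all invented; it is not the actual GFDL model or its data.)

import random

# Toy illustration: run a "model" several times with different inputs
# (here, random seeds), then keep whichever run happens to track the
# observed record best. All numbers here are invented.
observed = [0.1, 0.3, 0.2, 0.5, 0.6]  # hypothetical decadal anomalies, deg C

def toy_model_run(seed):
    """A stand-in 'climate model': a linear trend plus seeded noise."""
    rng = random.Random(seed)
    return [0.1 * i + rng.gauss(0, 0.15) for i in range(len(observed))]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

runs = {seed: toy_model_run(seed) for seed in range(5)}  # five integrations
best = min(runs, key=lambda s: rmse(runs[s], observed))
print("Best-matching run: #%d, RMSE = %.3f" % (best, rmse(runs[best], observed)))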

In discussing this work, Robert Davis says the following, “Yes, it’s possible to get a model to reproduce anything you choose merely by tweaking a few parameters and running it enough times. But the model that reproduces the temperature history screws up precipitation, and the model that gets rainfall correct can’t generate the proper wind or pressure fields. The reason is actually quite plain: We don’t understand the physics of the atmosphere well enough to model climate change. That is the grim reality that at least four out of five climate models chose to ignore.”(3) John Christy adds: “Keep firmly in mind that models can’t prove anything. Even when a model generates values that appear to match the past 150 years, one must remember that modelers have had 20 years of practice to make the match look good. Is such model agreement due to fundamentally correct science or to lots of practice with altering (or tuning) the sets of rules in a situation where one knows what the answer should be ahead of time?” (4)

Science writer James Trefil echoes this thought. “After you’ve finished a model, you would like to check it out. The best validation is to apply the simulation to a situation where you already know the answer. You could, for example, feed in climate data from one hundred years ago and see if the GCM predicts the present climate. The fact that GCMs can’t do this is one reason I take their predictions with a grain of salt.” (5) A comparison of nearly all of the most sophisticated climate models with actual measurements of current climate conditions found the models in error by about 100 percent in cloud cover, 50 percent in precipitation, and 30 percent in temperature change. Even the best models give temperature change results differing from each other by a factor of two or more. (6)




Reliability is in Question

While on the topic of global warming, which in large part has been made a major scientific and political issue because of complex models, here are other examples of the poor predictability of some of those models:
•The models that served as the scientific background for the 1992 Rio Treaty implied that the world should have warmed 1.5 C since the late 19th century. In actuality, the world has warmed only 0.5 C, so the models were off by a factor of 3 (a quick arithmetic check follows this list). (7)
•As computer simulations have become more sophisticated, projections of rising sea levels have become much smaller. A 25-foot increase predicted in 1980 fell to three feet by 1985, and then to one foot by 1995. (8)
•Computers forecast a warming of the troposphere of 0.224 C per decade, when actual measurements showed a warming of only 0.034 C per decade. Predictions were off by almost a factor of 7. (9)
•Computer models of ocean circulation did not predict temperature changes which occurred in the deep sea south of the Aleutian Islands. Keay Davidson observes: “At the very least, the findings indicate that computer models of ocean circulation—which are vital for monitoring climate change—are badly in need of a tune-up. The discovery was not explicitly predicted by any known computer models of ocean circulation.” (10)
•The buildup of atmospheric carbon dioxide has slowed during the past 10 years. Original predictions were that it would be up to 600 ppm by the year 2100, but that number has been reduced to only 500 ppm. (11)
•Atmospheric temperatures in the stratopause and mesopause regions (the atmospheric layers at about 30 and 50 miles altitude, respectively) at the Earth’s poles were found to be about 40-50 degrees F cooler than model predictions. (12)
•Jane Shaw reports that since “computers have to treat large areas of the earth as if they are on one elevation, their findings don’t give good descriptions of regions that may be hundreds of miles wide. Mountain ranges have an enormous impact on climate; their cooler air causes snow and rain to fall, drying out the air as it moves over the mountains. Yet most computer models do not distinguish mountain ranges from prairies. The building blocks for the models are not fine-grained enough; the mountains have to be flattened in the models and the valleys filled in. The predictions for the wet, mountainous forests of the Pacific Northwest are not much different than the predictions for the dry desert in Nevada. Because they are unable to make such distinctions, the climate descriptions may be distorted.” (13) Here’s an example. Martin Wild and his colleagues recently proposed that melting over Greenland should remain negligible, even with doubled carbon dioxide. (14) Why the big difference from past assessments? The short answer is resolution, as discussed above. Even the best models end up representing Greenland as a gently rounded mound rather than as a steep-walled mesa. And, because melting takes place only at lower elevations, the area prone to melting gets exaggerated in the models. (15) So is Greenland really melting? Here’s some data that I bet you haven’t heard: the West Greenland Ice Sheet, the largest mass of polar ice in the Northern Hemisphere, has thickened by up to seven feet since 1980. (16)
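(For the numerically inclined, here is the arithmetic behind the “factor of 3” and “almost a factor of 7” figures above, with the numbers taken straight from the bullets; the snippet is only a convenience for the reader.)

# Prediction-to-observation ratios cited in the list above.
predicted_warming, observed_warming = 1.5, 0.5   # deg C since the late 1800s
predicted_trend, observed_trend = 0.224, 0.034   # deg C per decade, troposphere

print(predicted_warming / observed_warming)   # 3.0 -> off by a factor of 3
print(predicted_trend / observed_trend)       # ~6.59 -> almost a factor of 7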

Other Examples

Global warming isn’t the only situation where computer models exhibit shortcomings. The best model available at the time of the Chernobyl accident did not describe a major feature of the radioactivity deposition 80 miles northeast of the plant, and it was mostly in this region that children ingested or inhaled radioactive iodine and developed thyroid cancers.(17)

The plume from the Kuwait oil fires (February 1991 to October 1991) was reasonably well described by the models, but some individual deviations, where air masses turned westward over Riyadh in Saudi Arabia, were not well predicted even after the event.(17)

A program researchers were using to study the effects of airborne soot on human health produced erroneous results that went unchecked for years. A team in Canada estimates it will revise its figures on the impact of airborne soot on mortality downward by 20-50%. Other groups throughout the world using the same tool are now redoing their calculations.(18)

Stuart Beaton and his colleagues note that an EPA model, which treats all cars of a given model year as having the same odometer reading, the same annual mileage accumulation, and an equal likelihood of emission control problems, has little success in predicting urban on-road vehicle emissions. This leads them to conclude, “lack of linkage between EPA’s model and real-world measurements leads to inappropriate policy decisions and wastes scarce resources. If we want to maintain public support for programs that claim to reduce air pollution, those programs must do what they claim in the real world, not just in the virtual world of the computer modeler.”(19)

Jerry Dennis reported this about the Great Lakes, “One recent computer model projected a period of drought and heat continuing through the twenty-first century, resulting in even lower water levels. Another predicted more heat and precipitation, resulting in the Great Lakes staying at the same level or even rising a foot or so above average.”(20) Take your pick.

A Sacrilegious Thought

Naomi Oreskes and her co-authors argue that large computer models with multiple inputs should probably never be considered ‘validated.’ In their view, verification and validation of models of natural systems is impossible because natural systems are never closed, and because models are always non-unique. “Models can only be evaluated in relative terms, and their predictive value is always open to question.”(21) They quote Nancy Cartwright, who has said: “A model is a work of fiction.”(22)

While not necessarily accepting Cartwright’s viewpoint, Oreskes et al. compare a model to a novel. Some of it may ring true and some may not. “How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience? Fundamentally, the reason for modeling is a lack of full access, either in time or space, to the phenomena of interest.”(21) It’s obvious that in some cases we still have a long way to go with modeling.

References

1. Thomas L. Delworth and Thomas R. Knutson, “Simulation of Early 20th Century Global Warming,” Science, 287, 2246, March 24, 2000
2. Peter Gwynne, “The Cooling World,” Newsweek, 85, 64, April 28, 1975
3. Robert E. Davis, “Playing the numbers with climate model accuracy,” Environment & Climate News, 3, 5, July 2000
4. John R. Christy, “The Global Warming Fiasco,” in Global Warming and Other Eco-Myths, Ronald Bailey, Editor, (Roseville, CA, Prima Publishing, 2002), 15
5. James Trefil, The Edge of the Unknown, (New York, Houghton Mifflin Company, 1996), 46
6. Jay Lehr and Richard S. Bennett, “Computer Models & The Need For More Research,” Environment & Climate News, 6, 12, July 2003
7. Robert W. Davis and David Legates, “How Reliable are Climate Models?,” Competitive Enterprise Institute, June 5, 1998
8. “The Global Warming Crisis: Predictions of Warming Continue to Drop,” in Facts on Global Warming, (Washington, DC, George C. Marshall Institute, October 15, 1997)
9. TSAugust, www.tsaugust.org/Global%20Warming.htm, accessed January 19, 2004
10. Keay Davidson, “Going to depths for evidence of global warming,” San Francisco Chronicle, A4, March 1, 2004
11. Jane S. Shaw, Global Warming, (New York, Greenhaven Press, 2002), 23
12. C. S. Gardner, et al., “The temperature structure of the winter atmosphere at the South Pole,” Geophysical Research Letters, Issue 16, Citation 1802, August 28, 2002
13. Jane S. Shaw, Global Warming, 60
14. Martin Wild, et al., “Effects of polar ice sheets on global sea level in high-resolution greenhouse scenarios,” Journal of Geophysical Research, 108, No. D5, 4165, 2003
15. David Schneider, “Greenland or Whiteland?,” American Scientist, 91, 406, September-October 2003
16. David Gorack, “Glacier melting: Just a drop in the bucket,” Environment & Climate News, 2, 6, May 1999
17. Richard Wilson and Edmund A. C. Crouch, Risk-Benefit Analysis, Second Edition, (Cambridge, Harvard University Press, 2001), 74
18. Jonathan Knight, “Statistical error leaves pollution data up in the air,” Nature, 417, 677, June 13, 2002
19. Stuart P. Beaton, et al., “On-Road Vehicle Emissions: Regulations, Costs and Benefits,” Science, 268, 991, May 19, 1995
20. Jerry Dennis, The Living Great Lakes, (New York, St. Martin’s Press, 2003), 137
21. Naomi Oreskes et al., “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science, 263, 641, February 4, 1994
22. Nancy Cartwright, How the Laws of Physics Lie, (Oxford, Oxford University Press, 1983), 153

Thursday, January 24, 2008

Water Can Be Too Pure

Jack W. Dini, Livermore, CA

(This appeared in Hawaii Reporter, January 24, 2008)

Can water be too pure? If you’re a farmer, the answer is yes. Desalinated water is one example. The drawback of purity is that desalination not only separates the undesirable salts from the water, but also removes ions that are essential to plant growth. When desalinated water replaces ordinary irrigation water, basic nutrients like calcium, magnesium, and sulfate are missing at the levels needed to make additional fertilization with these elements unnecessary.

An example is a new facility in Ashkelon, on Israel’s southern Mediterranean coast. Although the Ashkelon facility was designed to provide water for human consumption, because of relatively modest population densities in southern Israel, a substantial percentage of the desalinated water was delivered to farmers. Recent evaluation of the effect of the plant’s desalinated water on agriculture, however, produced some surprising negative results. Water from the Ashkelon plant has no magnesium, whereas typical Israeli water has 20 to 25 mg/liter of magnesium. After farmers used the desalinated water, magnesium deficiency symptoms appeared in crops, including tomatoes, basil, and flowers, and had to be remedied by fertilization. To meet agricultural needs, missing nutrients might be added to desalinated water in the form of fertilizers, at additional cost. If the minerals required for agriculture are not added at the desalination plant, farmers will need sophisticated independent control systems in order to cope with the variable water quality. (1)

Farmers can also be affected by run-off water that is too pure. Snow-melt run-off from the Sierra Nevada, Cascades, or other mountains can fall into this category. For irrigation to be effective, it needs to penetrate the soil, supplying enough water to sustain the crops until the next irrigation, and the most important factor for water penetration is the salts (or lack thereof) present in the water and/or soil. A lack of calcium in the majority of soils, due to snow-melt irrigation water or poor-quality subsurface water, is leading to serious problems in California. Danyal Kasapligil, an agronomist in Fresno, CA, reports, “What we are seeing in the field is, not only are there more and more water penetration problems, but crop quality is also rapidly declining because of a lack of calcium in our irrigation water.” (2) Brent Rouppet adds that for irrigation water to penetrate deeply into the soil, the electrical conductivity of the water needs to be greater than approximately 0.60 dS/m (decisiemens per meter). Irrigation water with less than 0.60 dS/m conductivity contributes to loss of soil structure and increased water penetration problems. The snow-melt run-off from the Sierra Nevada Mountains is so pure that its electrical conductivity can be 0.02 dS/m, or less. This water lacks calcium, essential for good soil structure, and any calcium existing in the soil profile is over time leached below the root zone or used by the crops, and is typically not replaced in the quantities required. (2)
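(For readers who want Rouppet’s rule of thumb in concrete form, here is a minimal sketch. The 0.60 dS/m threshold and the 0.02 dS/m snow-melt figure come from the paragraph above; the function name and the sample readings are my own invention.)

def water_penetrates_well(ec_ds_per_m):
    """Rule of thumb quoted above: EC above ~0.60 dS/m favors deep penetration."""
    return ec_ds_per_m > 0.60

# Hypothetical sample readings, in dS/m.
for label, ec in [("well water", 0.75), ("Sierra Nevada snow-melt", 0.02)]:
    verdict = "penetrates well" if water_penetrates_well(ec) else "penetration problems likely"
    print("%s (%.2f dS/m): %s" % (label, ec, verdict))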

Here’s another example where absolute purity of water can be a problem. Philip West of Louisiana State University notes, “With productive waters, it is quite apparent that absolute purity is out of the question. If the Mississippi River passing Baton Rouge and New Orleans consisted of distilled water there would be no seafood industry such as we now have in Louisiana. With copper ‘contaminating’ the water there would be no oysters. Traces of iron, manganese, cobalt, copper, and zinc are essential for the crabs, snapper, flounder, shrimp and other creatures that abound in Gulf waters. As unpleasant as it sounds, even the run-off from the fertilized fields of the heartland and the sewage discharges into the Missouri, Ohio, and Mississippi River systems pollute and thus ultimately nourish the waters.” (3)

One last item. Are you a bottled water fan? If so, you could be giving up a primary source of fluoride, which is the public health system’s main weapon against tooth decay. What comes in the bottle has either been filtered to remove impurities or is spring water that is reputed to be purer than tap water. But the filtering process also takes out fluoride. Not only does fluoride occur naturally in water, but about half the nation’s public water supplies are supplemented with additional fluoride. The recommended level of fluoride set by the EPA for municipal water systems is 0.7 to 1.2 parts per million (ppm). The maximum acceptable level is 4 ppm. If a water supply contains less than 0.7 ppm of fluoride, dentists recommend the use of a fluoride supplement, in tablets or liquid, from birth until the late teenage years. (4)
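(The EPA figures above amount to a simple decision rule. The sketch below is illustrative only: the cutoffs are the ones quoted in the paragraph, while the function name and the sample levels are hypothetical.)

def classify_fluoride(ppm):
    """Classify a water supply's fluoride level against the figures cited above."""
    if ppm > 4.0:
        return "above the 4 ppm maximum acceptable level"
    if ppm < 0.7:
        return "below 0.7 ppm; dentists recommend a supplement"
    if ppm <= 1.2:
        return "within the recommended 0.7-1.2 ppm range"
    return "above the recommended range but under the 4 ppm maximum"

for level in [0.3, 1.0, 2.5, 5.0]:  # hypothetical samples
    print("%.1f ppm: %s" % (level, classify_fluoride(level)))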

When researchers in Ohio sampled more than 50 brands of bottled water for fluoride content, they found that 90 percent of them had levels below the recommended range for dental health. (5) In South Australia, a study found a 71 percent rise in tooth decay in children, which was attributed to the lack of enamel-strengthening fluoride in the bottled water that has become so popular in the area. (6)

References

1. U. Yermiyahu et al., “Rethinking Desalinated Water Quality and Agriculture,” Science, 318, 920, November 9, 2007
2. Brent Rouppet, “Irrigation Water: A Correlation to Soil Structure and Crop Quality?,” Crops, August 2006, Page 22
3. Raphael G. Kazmann, in Rational Readings on Environmental Concerns, Jay H. Lehr, Editor, (New York, Van Nostrand Reinhold, 1992), 311
4. Marian Burros, “Eating Well; Bottled Water: Is It Too Pure?,” nytimes.com, November 22, 1989
5. “Fluoride Alert,” Runner’s World, 35, 32, July 2000
6. Verity Edwards, “Bottled water a dental disaster,” Australiannews.com, August 2, 2006

Tuesday, January 22, 2008

Scurvy and Paprika

Jack W. Dini
Livermore, California

What famous scientist do you think of when one mentions vitamin C? My guess is that it’s Linus Pauling because of his much publicized efforts at labeling vitamin C as a remedy for cancer and the common cold. The early history of vitamin C reveals that there were some other good scientists and interesting events that led to the understanding of this vital ingredient.

If you travel to Hungary, one item you might bring home with you is some paprika. At least that’s what a lot of folks did on my recent trip to that country. Hungary is famous for its paprika. What’s all this have to do with vitamin C? Think scurvy. Joe Schwarcz reports, “The first vitamin-deficiency disease to be recognized was scurvy, described as early as 1550 BC by the Egyptians in the Ebers Papyrus. In the sixteenth and seventeenth centuries, when long ocean voyages became common, thousands of sailors died from scurvy, which is characterized by spongy gums, loose teeth, and bleeding into the skin and mucous membranes. The first clue that scurvy was a diet-related disease came from North American Indians who showed French explorer Jacques Cartier that a brew made from pine needles could cure the condition.” (1)

James Lind, a Scottish physician, had been inspired to work on this issue when he heard about a British Navy expedition that had gone terribly wrong. James Burke provides the story, “In 1740 Captain George Anson had sailed from England with six ships and over a thousand men. His mission: to head for the Pacific and clobber the Spanish wherever he found them. He did so, in spades, attacking Spanish ports and ships, laying waste right and left in the usual manner, and coming home four years later with so much treasure it took thirty wagons to haul it from the docks to the Tower of London for safekeeping. Every crew member walked off Anson’s ship rich for life. There was a lot more booty than originally planned for each man to share because, of the original six ships and one thousand crew, only one ship with 145 men made it back. Scurvy had killed the rest.” (2)

Lind proceeded to carry out what was probably the first properly controlled trial in the history of clinical nutrition. For fourteen days he kept six pairs of scurvy patients on the same diet, but gave each pair a different medicine: cider, elixir vitriol, vinegar, seawater, a ‘medicinal paste,’ or oranges and lemons. The citrus fruit was the cure, and in 1753 Lind published A Treatise of the Scurvy. (2) It took another 50 years, but the British Navy finally got around to requiring sailing vessels to carry supplies of lemons or limes. Besides solving the scurvy problem, this led to the slang term ‘Limeys’ for British sailors. (1) The upside for the British, besides the saving of many lives, is that, according to historians, many a naval victory claimed by the ‘Limeys’ resulted because these sailors, unlike their enemies, were protected from scurvy. (3)

So what was the magic ingredient in the citrus fruits? This gets us back to paprika, since it played a key role in helping isolate the anti-scurvy factor found in citrus fruits. Albert Szent-Gyorgyi was a Hungarian physician studying plant chemistry around 1925. He noted a similarity between the darkening of damaged fruit and skin discoloration in patients suffering from Addison’s disease, an adrenal gland disorder. He had observed that certain fruits like oranges did not turn brown and their juice prevented others from discoloring. He isolated the substance that prevented browning and suggested the name “Godnose” for it. This wasn’t accepted very well, so he changed the name to hexuronic acid. Szent-Gyorgyi wanted to do more research on this material, but he needed large amounts of it. Along the way he accepted a university position in Szeged, which is the paprika capital of the world. As Joe Schwarcz reports, “To live in Szeged is to be surrounded by the sights and smells of paprika. Szent-Gyorgyi couldn’t help but wonder if paprika, like oranges and limes, might also contain his hexuronic acid. Did it ever!” (3) (Fresh red peppers have more than seven times as much vitamin C as oranges, but the very high heat of drying destroys much of the vitamin C.) (4) Within a short time, Szent-Gyorgyi had isolated a kilogram of the stuff and determined that it was identical to the anti-scurvy factor found in citrus fruits. He rechristened it ‘ascorbic acid.’ Today we know it as vitamin C. Why name it vitamin C? Because the practice of naming vitamins by letter had been introduced some twenty years earlier, and A and B were already taken. (3)

Some last words on Szent-Gyorgyi. He left an impressive legacy as a highly admired biochemist, winning the Nobel Prize in Physiology or Medicine for his biological combustion discoveries. He is credited with saying, “Very often, when you look for one thing, you find something else.” (5)

References

1. Joe Schwarcz, The Fly in the Ointment, (Toronto, ECW Press, 2004), 122

2. James Burke, Circles, (New York, Simon & Schuster, 2000), 235

3. Joe Schwarcz, That’s the Way the Cookie Crumbles, (Toronto, ECW Press, 2002)

4. “Paprika”, http://www.theepicentre.com/Spices/paprika/html

5. “Know Your Strengths”, http://www.dailycelebrations.com/063001.htm

Many Weather Stations in the US Yield Incorrect Data

Jack Dini
Livermore, California

(This appeared in Hawaii Reporter, December 3, 2007)

Imagine if you were tasked with measuring and tracking the global average per capita income. Then imagine that one year hundreds of your offices, including many in Africa, shut down, so you simply got no information from those regions. Would you be surprised if you added up all your numbers that year and suddenly your ‘average per capita income’ was higher? Would you consider that data reliable? Christopher Horner asks, “Would you expect some media skepticism if suddenly people read your numbers and declared that the world was getting much richer?”

When an analogous course of events unfolded in the world of climate science, the skepticism was notably absent. When the Soviet Union was falling apart from 1989 to 1992, folks there didn’t much care about keeping up temperature monitoring stations. Thousands were closed, and it’s important to note that many of these were in cold regions. Others around the world closed at the same time. Could this have helped make the decade that followed the ‘hottest decade’ ever?
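(The sampling effect is easy to demonstrate with made-up numbers: drop the coldest stations from a fixed set and the ‘average’ rises, even though no individual reading changed. A minimal sketch, all data hypothetical:)

# Hypothetical station temperature anomalies, deg C; the "siberia" entries
# stand in for the cold-region stations mentioned above. No real data.
stations = {
    "tropical_1": 1.2, "tropical_2": 0.9, "midlat_1": 0.4,
    "midlat_2": 0.6, "siberia_1": -1.8, "siberia_2": -2.1,
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

before = mean(stations.values())
after = mean(v for k, v in stations.items() if not k.startswith("siberia"))
print("All stations: %+.2f   After cold-station closures: %+.2f" % (before, after))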

Have you read about this in the media? I doubt it, although the inadequacy of sampling has not gone unnoticed. A 1997 conference on World Climate Research produced the book Adequacy of Climate Observing Systems, which stated, “Without action to reverse this decline and develop the Global Climate Observation System, the ability to characterize climate change and variation over the next 25 years will be even less than during the past century.” It also noted, “The climate research community relies on a number of disparate observation systems to assemble a data base that it uses to analyze climate variability and change. A few of these systems function well, but for most there are clear warning signals that must be heeded if climate variability and change is to be observed with sufficient fidelity over the next decade.” Today, more than ten years after these observations, it’s not clear much has changed.

Enter Anthony Watts, a northern California meteorologist who is garnering national attention for his project of checking the condition and placement of weather stations used to monitor the nation’s climate. To date, Watts and his volunteers have found and photographed over 500 of the 1221 stations. This information is available on Watts’ site, surfacestations.org. The concern is that objects near a station affect what thermometers record. Buildings, parking lots, air conditioners, and sewage treatment plants near weather stations may emit heat and ultimately skew readings. Photos show some stations placed in parking lots near cars, on rooftops, next to diesel generators, and at non-standard heights. Clearly, many are far from meeting the guidelines to qualify as properly maintained temperature stations.

Others besides Watts are concerned about the locations of weather stations. One is Roger Pielke Sr., a highly published atmospheric scientist. Pielke and his colleagues reported in a recent paper, “the use of temperature data from poorly sited stations can lead to a false sense of confidence in the robustness of multidecadal surface air temperature trend assessments.” They concluded that there are large uncertainties associated with surface temperature trends from the poorly sited stations.

A last note on this topic: assume there was evidence that some weather stations around the country were underestimating temperatures. Noel Sheppard asks, “Would a media fixated on expanding climate change alarmism investigate and report this phenomenon to demonstrate that the planet was actually warmer than people think? ‘60 Minutes,’ ‘Dateline’ and others would have all done rather lengthy exposes into the matter, correct?”