Dangers of Deploying Weapons and Nuclear Power in Space

Author: Prashant R Dahat

A nuclear weapon is a weapon that derives its destructive force from nuclear reactions of fission or fusion. As a result, even a nuclear weapon with a relatively small yield is significantly more powerful than the largest conventional explosives, and a single weapon is capable of destroying an entire city. In the history of warfare, nuclear weapons have been used twice, both during the closing days of World War II. The first event occurred on the morning of August 6, 1945, when the United States dropped a uranium gun-type device code-named “Little Boy” on the Japanese city of Hiroshima.

The second event occurred three days later, when a plutonium implosion-type device code-named “Fat Man” was dropped on the city of Nagasaki. The use of these weapons, which resulted in the immediate deaths of around 100,000 to 200,000 individuals and even more over time, was and remains controversial — critics charged that they were unnecessary acts of mass killing, while others claimed that they ultimately reduced casualties on both sides by hastening the end of the war. The topic has seen renewed debate recently in the wake of increased terrorism involving the killing of civilians by both state and non-state actors, with some parties claiming that the ends justify the means.

Since the Hiroshima and Nagasaki bombings, nuclear weapons have been detonated on over two thousand occasions for testing and demonstration purposes. The only countries known to have detonated such weapons are the United States, the Soviet Union, the United Kingdom, France, the People’s Republic of China, India, and Pakistan. These countries are the declared nuclear powers (with Russia inheriting the weapons of the Soviet Union after its collapse).
Various other countries may hold nuclear weapons but have never publicly admitted possession, or their claims to possession have not been verified. For example, Israel has modern airborne delivery systems and appears to have an extensive nuclear program with hundreds of warheads, though it officially maintains a policy of “ambiguity” with respect to its actual possession of nuclear weapons. North Korea has recently stated that it has nuclear capabilities (although it has made several changing statements about the abandonment of its nuclear weapons programs, often dependent on the political climate at the time) but has never conducted a confirmed test and its weapons status remains unclear.
Iran currently stands accused by a number of governments of attempting to develop nuclear capabilities, though its government claims that its acknowledged nuclear activities, such as uranium enrichment, are for peaceful purposes. South Africa also secretly developed a small nuclear arsenal but disassembled it in the early 1990s. Apart from their use as weapons, nuclear explosives have been tested and used for various non-military purposes.
The first nuclear weapons were created in the United States during World War II, as part of the top-secret Manhattan Project, by an international team that included many displaced émigré scientists from central Europe, with assistance from the United Kingdom and Canada. While the first weapons were developed primarily out of fear that Nazi Germany would develop them first, they were eventually used against the Japanese cities of Hiroshima and Nagasaki in August 1945.
The Soviet Union developed and tested their first nuclear weapon in 1949, based partially on information obtained from Soviet espionage in the United States. Both the U.S. and USSR would go on to develop weapons powered by nuclear fusion (hydrogen bombs) by the mid-1950s. With the invention of reliable rocketry during the 1960s, it became possible for nuclear weapons to be delivered anywhere in the world on a very short notice, and the two Cold War superpowers adopted a strategy of deterrence to maintain a shaky peace.
Nuclear weapons were symbols of military and national power, and nuclear testing was often used both to test new designs and to send political messages. Other nations also developed nuclear weapons during this time, including the United Kingdom, France, and China. These five members of the “nuclear club” agreed to attempt to limit nuclear proliferation to other nations, though at least four other countries (India, South Africa, Pakistan, and most likely Israel) developed nuclear arms during this time. At the end of the Cold War in the early 1990s, the Russian Federation inherited the weapons of the former USSR and, along with the U.S., pledged to reduce its stockpile for increased international safety. Nuclear proliferation has continued, though, with Pakistan testing its first weapons in 1998, and North Korea claiming to have developed nuclear weapons in 2004. In January 2005, Pakistani metallurgist Abdul Qadeer Khan confessed to selling nuclear technology and nuclear weapons information to Iran, Libya, and North Korea in a massive international proliferation ring.
Nuclear weapons have been at the heart of many national and international political disputes, have played a major part in popular culture since their dramatic public debut in the 1940s, and have usually symbolized the ultimate ability of mankind to harness the strength of nature for destruction. There have been (at least) four major false alarms, the most recent in 1995, that almost resulted in the U.S. or USSR/Russia launching its weapons in retaliation for a supposed attack.
Additionally, during the Cold War the U.S. and USSR came close to nuclear warfare several times, most notably during the Cuban Missile Crisis. As of 2005, there are estimated to be at least 29,000 nuclear weapons held by at least seven countries, 96 percent of them in the possession of the United States and Russia.
Nuclear winter is the environmental devastation that certain scientists contend would probably result from the hundreds of nuclear explosions in a nuclear war. The damaging effects of the light, heat, blast, and radiation caused by nuclear explosions had long been known to scientists, but such explosions’ indirect effects on the environment remained largely ignored for decades. In the 1970s, however, several studies posited that the layer of ozone in the stratosphere that shields living things from much of the Sun’s harmful ultraviolet radiation might be depleted by the large amounts of nitrogen oxides produced by nuclear explosions.
Further studies speculated that large amounts of dust kicked up into the atmosphere by nuclear explosions might block sunlight from reaching the Earth’s surface, leading to a temporary cooling of the air. Scientists then began to take into account the smoke produced by vast forests set ablaze by nuclear fireballs, and in 1983 an ambitious study, known as the TTAPS study (from the initials of the last names of its authors, R.P. Turco, O.B. Toon, T.P. Ackerman, J.B. Pollack, and Carl Sagan), took into consideration the crucial factor of smoke and soot arising from the burning petroleum fuels and plastics in nuclear-devastated cities. (Smoke from such materials absorbs sunlight much more effectively than smoke from burning wood.) The TTAPS study coined the term “nuclear winter,” and its ominous hypotheses about the environmental effects of a nuclear war came under intensive study by both the American and Soviet scientific communities.
The basic cause of nuclear winter, as hypothesized by researchers, would be the numerous and immense fireballs caused by exploding nuclear warheads. These fireballs would ignite huge uncontrolled fires (firestorms) over any and all cities and forests that were within range of them. Great plumes of smoke, soot, and dust would be sent aloft from these fires, lifted by their own heating to high altitudes where they could drift for weeks before dropping back or being washed out of the atmosphere onto the ground. Several hundred million tons of this smoke and soot would be shepherded by strong west-to-east winds until they would form a uniform belt of particles encircling the Northern Hemisphere from 30° to 60° latitude. These thick black clouds could block out all but a fraction of the Sun’s light for a period as long as several weeks. Surface temperatures would plunge for a few weeks as a consequence, perhaps by as much as 11° to 22° C (20° to 40° F).
The conditions of semidarkness, killing frosts, and subfreezing temperatures, combined with high doses of radiation from nuclear fallout, would interrupt plant photosynthesis and could thus destroy much of the Earth’s vegetation and animal life. The extreme cold, high radiation levels, and the widespread destruction of industrial, medical, and transportation infrastructures along with food supplies and crops would trigger a massive death toll from starvation, exposure, and disease. A nuclear war could thus reduce the Earth’s human population to a fraction of its previous numbers. A number of scientists have disputed the results of the original calculations, and, though such a nuclear war would undoubtedly be devastating, the degree of damage to life on Earth remains controversial.
Three distinct atmospheric problems have been debated intensely since about the mid-1970s, though two of them are quite old issues: the possible reduction of stratospheric ozone from chemical emissions; the generation of acid rain; and climatic change stemming from the greenhouse effect. What these three problems have in common is quite simple: they all
(1) are complex and punctuated by large uncertainties,
(2) could be long-lasting,
(3) transcend state and even national boundaries,
(4) may be difficult to reverse,
(5) are inadvertent by-products of widely supported economic activities, and
(6) may require substantial investments of present resources to hedge against the prospect of large future environmental changes.
Ozone depletion

Of these problems, the only one to have received any substantial public policy action is that centering on the reduction of stratospheric ozone. Ironically, it is perhaps the easiest of the problems to reverse. The importance of the stratospheric ozone layer in shielding the Earth’s surface from the harmful effects of solar ultraviolet radiation has been recognized for several decades. It was not until the early 1970s, however, that scientists began actually to grapple with the fact that even relatively small decreases in the stratospheric ozone concentration can have a serious impact on human health—an increased incidence of skin cancer, particularly among fair-skinned peoples.
Plans in the United States, Great Britain, and France to build a commercial fleet of supersonic aircraft triggered much heated discussion over the potential reduction of the ozone layer by the exhaust gases (e.g., nitric oxide) emitted by such high-altitude planes. The debate in turn stimulated intensive scientific research on the stratosphere, which resulted in new findings and new concerns.
By the mid-1970s, various U.S. investigators had determined that chlorofluorocarbons (CFCs), widely employed as propellants in aerosol spray cans, could reduce the amount of stratospheric ozone significantly. A temporary ban was imposed on the use of certain CFCs in the United States, but only after much emotional debate among environmental and industrial scientists, reports by the National Academy of Sciences, and the development by industry of economically viable substitutes for spray-can propellants. (For a more detailed discussion of this issue, see atmosphere: Depletion of stratospheric ozone.)
Acid rain

Acid precipitation has been known for centuries in locales such as London, where sulfur discharged by the burning of coal produces toxic smogs; however, the problem did not assume scientific, economic, and political prominence until the early 1980s. As it transcends national boundaries, the acid rain problem has become a subject of heated controversy between otherwise friendly neighbors like the United States and Canada or Germany and the Scandinavian countries.
Scientific studies have shown that the process that results in the formation of acid rain generally begins with the discharge of sulfur dioxide and nitrogen oxides into the atmosphere. These waste gases are released by the combustion of fossil fuels by automobiles, electric power plants, and smelting and refining facilities. They also are emitted by some biological processes. The gases combine with atmospheric water vapor to form sulfuric and nitric acids. When rain or some other form of precipitation falls to the surface, it is highly acidic, frequently with a pH value of less than 4. (The term pH is defined as the negative logarithm of the hydrogen-ion concentration in moles per liter. The pH scale ranges from 0 to 14, with lower numbers representing greater acidity.)
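As a rough illustration of the logarithmic pH scale (a standard chemistry relation, not a calculation from this article), the factor-of-ten jumps in acidity can be computed directly:

```python
import math

def ph(h_ion_molar):
    """pH is the negative base-10 logarithm of the hydrogen-ion
    concentration expressed in moles per liter."""
    return -math.log10(h_ion_molar)

# Unpolluted rain is already slightly acidic (dissolved CO2), around pH 5.6;
# acid rain at pH 4 carries roughly 40 times the hydrogen-ion concentration.
print(ph(2.5e-6))  # ~5.6, typical unpolluted rain
print(ph(1.0e-4))  # 4.0, acid rain
```

Each whole pH unit corresponds to a tenfold change in hydrogen-ion concentration, which is why a reading below 4 marks precipitation as strongly acidic.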
The consequent acidification of surface and subsurface waters is widely believed to have a detrimental effect on the ecology of the affected areas. Such regions as the Canadian Shield in Quebec and the Adirondack Mountains in New York are especially susceptible to contamination, because the snowpack buildup in winter allows a deadly pulse of acidic meltwater to occur during spring. As highly acidic water is toxic to many aquatic organisms, many lakes in these regions are biologically damaged. It also has been found that acid precipitation is harmful to trees and other forms of vegetation, causing foliar injury and reduction in growth (see also atmosphere: Acid rain and allied problems).
The “cause-and-effect linkages” of the acid rain problem have been more clearly demonstrated in scientific terms than those related to ozone depletion; yet, the former has received much less direct policy action. The primary reason is the potential economic impact of efforts to remedy the problem—i.e., the enormous expenditure that would be required to control the emission of sulfur compounds from power plants, refineries, and facilities of other smokestack industries. There continues to be loud and angry debate as to the environmental and economic benefits of corrective action, particularly since the relationship between the discharge of potentially acidic compounds and the ultimate delivery of acid precipitation to specific geographic areas is not straightforward.
Greenhouse effect induced by carbon dioxide and other trace gases

Finally, the most long-lasting and potentially least reversible global problem is the greenhouse effect. As noted above, this effect is induced by carbon dioxide, chlorofluorocarbons, methane, and more than a dozen other gases whose concentrations in the atmosphere are increasing. The role played by carbon dioxide is the most significant. The amount of CO2 in the atmosphere has risen steadily since the mid-1800s, largely as a result of the combustion of coal, oil, and natural gas on an ever-widening scale. In 1850 the global CO2 level of the atmosphere was roughly 280 parts per million, whereas by the late 1980s it had increased to approximately 350 parts per million.
Should present trends in the emission of greenhouse gases, particularly of CO2, continue beyond another 100 years, climatic changes larger than any ever experienced during recent geologic periods can be expected. This could substantially alter natural and agricultural ecosystems, human and animal health, and the distribution of climatic resources. In addition, any significant greenhouse warming could cause a rapid melting of some polar ice, resulting in a rise in sea level and the consequent flooding of coastal areas. In spite of these long-term possibilities, the greenhouse problem has received the least policy-oriented attention of any of the three major issues at hand. There are various reasons for this:
(1) The problem is fraught with technical uncertainties.
(2) It has perceived “winners” and “losers”—economic and otherwise.
(3) No one nation acting alone can do much to counteract the CO2 buildup in the atmosphere.
(4) Dealing with the problem substantively could be expensive and even alter life-styles.
(5) There is no way of proving the validity of the greenhouse theory to everyone’s satisfaction except by “performing the experiment” on the real climatic system, which would necessarily involve all living things on Earth.
(6) The principal greenhouse gas, CO2, is an inherent by-product of the utilization of a commodity that is most fundamental to the economic viability of the world—fossil-fuel energy. (This fact more than any other explains why the greenhouse problem is so difficult to solve.)
It seems appropriate to break down the issue of greenhouse warming into a series of stages and then consider how policy questions might be addressed against the background of these more technical stages. The present discussion will deal with the problem specifically as it relates to increasing atmospheric CO2 for the sake of simplicity, though other related questions certainly can be dealt with in the same manner.
Behavioral assumptions

At the very basis of the problem is the need to make behavioral assumptions about the future use of fossil fuels (or, alternatively, of the projected extent of deforestation, because this, too, can affect the amount of carbon dioxide in the atmosphere; see above). In essence, this issue has to do with social science rather than with chemistry, physics, or biology. It depends on projections of human population, the per-capita consumption of fossil fuel, deforestation rates, reforestation activities, and perhaps even countermeasures for dealing with the additional CO2.
These projections, of course, are contingent on such questions as the likelihood of alternative energy systems or conservation measures becoming available, their costs, and their acceptability to society. Furthermore, trade in carbon-based fuels will be determined not only by energy requirements and available alternatives but also by the economic health of potential importing nations. This, in turn, will depend on whether those nations have adequate capital resources to spend on energy, rather than on other precious strategic commodities such as food or fertilizer or even weaponry. The future can be predicted by drawing up different projected CO2 concentrations based on assumed rates of growth in the use of fossil fuels.
Most typical projections are in the 1–2 percent annual growth range, implying a doubling of atmospheric CO2 by the middle of the 21st century. It has already increased by some 25 percent in the 20th century.

Carbon cycle

Once the plausible scenarios for CO2 buildup have been devised, it is necessary to determine exactly which interacting biogeochemical processes control the global distribution of carbon and its stocks. This involves uptake by green plants (since CO2 is the basis of photosynthesis, more CO2 in the air means faster rates of photosynthesis), changes in the amount of forested area, the types of vegetation planted, and the way in which climatic change affects natural ecosystems. The growth rate of photosynthesizers, such as grain plants and trees, may well increase. On the other hand, weeds and vegetation that harbor disease-bearing insects would also become more vigorous.
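The doubling arithmetic behind such projections can be sketched with the standard rule for exponential growth (an illustration of the arithmetic, not a calculation taken from the article itself):

```python
import math

def doubling_time_years(annual_growth_rate):
    # For a quantity growing exponentially at rate r per year,
    # the doubling time is ln(2) / r.
    return math.log(2) / annual_growth_rate

print(doubling_time_years(0.01))  # ~69 years at 1 percent per year
print(doubling_time_years(0.02))  # ~35 years at 2 percent per year
```

Sustained growth of 1–2 percent per year therefore doubles a quantity within roughly 35–70 years, which is consistent with a mid-21st-century doubling when counted from the late 20th century.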
Moreover, since there is a slow removal of CO2 from the atmosphere, largely accomplished through chemical processes in the ocean that take from decades to centuries, the rates at which climatic change modifies mixing processes in the ocean also need to be taken into account. There is considerable uncertainty over just how much CO2 will remain in the air, but most present estimates put the so-called airborne fraction at about 50 percent, which suggests that, over the time frame of a century or two at least, something like half the CO2 injected into the atmosphere will remain and exacerbate the greenhouse effect.
Once the amount of carbon dioxide that may exist in the atmosphere over the next century or so has been projected, its significance in terms of climate has to be estimated. The greenhouse effect, notwithstanding all of the controversy that surrounds the term, is not a scientifically controversial subject. In fact, it is one of the best, most well-established theories in the atmospheric sciences. For example, with its extremely dense atmosphere composed largely of CO2, Venus has very high surface temperatures (up to about 500° C).
By contrast, Mars, with its very thin CO2 atmosphere, has temperatures comparable to those that prevail at the Earth’s poles in winter. The explanation for the Venus hothouse and the Martian deep freeze is really quite clear—the greenhouse effect. This mechanism works because some gases and particles in a planet’s atmosphere preferentially allow sunlight to filter through to the surface of the planet relative to the amount of radiant energy that the atmosphere allows to escape back to space. This latter kind of energy (infrared energy) is affected by the amount of greenhouse material in the atmosphere. Therefore, increasing the amount of greenhouse gases raises the surface temperature of the planet by increasing the amount of heat that is trapped in the lowest part of its atmosphere.
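The mechanism just described can be made quantitative with a zero-dimensional energy-balance sketch (the values below are standard textbook numbers assumed for illustration, not figures from the article): absorbed sunlight must balance emitted infrared, which fixes a planet's effective radiating temperature.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_constant, albedo):
    # Balance absorbed sunlight S(1 - a)/4 against emitted
    # infrared sigma * T^4 and solve for T.
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_eff = effective_temperature(1361.0, 0.30)  # Earth-like values
print(round(t_eff))  # ~255 K
```

Earth's actual mean surface temperature is about 288 K; the gap of roughly 33 K between that and the effective temperature is precisely the infrared energy trapped by greenhouse gases in the lower atmosphere.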
While that part of the subject is not controversial, what is open to debate is exactly how much the Earth’s surface temperature will rise given a certain increase in a trace greenhouse gas such as CO2. Complications arise from processes known as feedback mechanisms. For example, if the CO2 added to the atmosphere were to cause a given temperature increase on Earth, the warming would melt some of the snow and ice that now exist. The white surface originally covered by the snow and ice would thus be replaced with darker blue ocean or brown soil, surfaces that absorb more sunlight than snow and ice do. Consequently, the initial warming would create a darker planet that absorbs more solar energy and thereby produces greater warming in the end. This is only one of a number of possible feedback mechanisms, however. Because many of them interact simultaneously in the climatic system, it is extremely difficult to estimate quantitatively how many degrees of warming the climate will undergo for any given increase in greenhouse trace gases.
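A minimal sketch of how such a feedback amplifies an initial warming uses the standard linear-feedback idealization (the numbers here are purely illustrative, not values from the article):

```python
def amplified_warming(direct_warming_c, feedback_fraction):
    # If a fraction f of each increment of warming returns as further
    # warming, the geometric series 1 + f + f^2 + ... sums to 1 / (1 - f).
    assert 0 <= feedback_fraction < 1
    return direct_warming_c / (1 - feedback_fraction)

print(amplified_warming(1.2, 0.0))  # 1.2 C: no feedbacks
print(amplified_warming(1.2, 0.5))  # 2.4 C: half of each increment fed back
```

Because several such feedbacks of uncertain strength operate at once, the final warming for a given CO2 increase is far harder to pin down than the direct effect alone.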
Unfortunately, there is no period in Earth history that investigators can examine when carbon dioxide concentrations in the atmosphere were, say, twice what they are today and whose climatic conditions are known with a high degree of certainty. For this reason, investigators cannot directly verify their quantitative predictions of greenhouse warming on the basis of historical analogs. Instead, they must base their estimates on climatic models. These are not laboratory models, since no laboratory could approach the complexity of the real world.
Rather, they are mathematical models in which basic physical laws are applied to the atmosphere, ocean, and glaciers; the equations representing these laws are solved with computers with the aim of simulating the present terrestrial climate. Many such models have been built during the past few decades. The calculations roughly agree that, if the atmospheric CO2 concentrations were to double, the Earth’s surface temperature would warm up somewhere between 1° and 5° C. As a point of comparison, the global surface temperature of the Earth during the Ice Age 18,000 years ago was on average about 5° C lower than it is today. Thus, a temperature change of more than one or two degrees worldwide represents a very substantial alteration.
To estimate the importance of climatic changes to society, researchers need not, however, study global average temperature so much as the possible regional distribution of evolving patterns of climatic change in the future. Will it, in the year 2010, be drier in Iowa, wetter in Africa, more humid in New York, or too hot in India? Unfortunately, to predict reliably the fine-scale regional response of variables, such as temperature and rainfall, requires climatic models of greater complexity (and cost) than are currently available. At present, there is simply no consensus among knowledgeable atmospheric scientists that the regional predictions of state-of-the-art models are reliable.
Nevertheless, most experts agree that the following coherent regional features might well occur by about the year 2035: wetter subtropical monsoonal rain belts; longer growing seasons in high latitudes; wetter springs in high and middle latitudes; drier midsummer conditions in some mid-latitude areas (a potentially serious agricultural and water supply problem in major grain-producing nations); increased probability of extreme heat waves (with possible health consequences for people and animals in already warm climates); and an increase in sea level by a few tens of centimeters. Considerable uncertainty remains in these regional estimates, even though many plausible scenarios have been investigated.
The possible impact on environment and society needs to be determined from a given set of scenarios for regional climatic change. Most important are the effects on crop yields and water supply. Also of concern is the potential for altering the range or number of pests that affect plants and diseases that threaten the health of humans or lower animal forms. Another point of interest is the effect on unmanaged ecosystems. For example, ecologists are much concerned that the rapid rate at which tropical forests are being destroyed due to human expansion is eroding the genetic diversity of the Earth. Since the tropical forests are in a sense repositories for the bulk of living genetic materials on Earth, the world is losing some of its irreplaceable biologic resources.
In addition, substantial future changes to tropical rainfall have been predicted by climatic models. This means that present-day reserves (or refugia) may be unable to sustain those species that they are designed to protect if rapidly evolving climatic change produces conditions in the refugia sufficiently different from those of today.
Estimating the distribution of economic winners and losers, given a scenario of climatic change, involves more than simply looking at the total dollars lost and gained, were it possible to make credible calculations. It also requires looking at such important equity questions as who wins and who loses and how the losers might be compensated and the winners taxed. If the corn belt in the United States, for example, were to “move” north by several hundred kilometers as a result of greenhouse warming, then $1,000,000,000 a year lost on Iowa farms could well become Minnesota’s $1,000,000,000 gain. Some macroeconomic views of this hypothetical problem, from the perspective of the United States as a whole, might see no net losses.
Much social consternation, however, would be generated by such a shift in climate, particularly since the cause of the change would be economic, CO2-producing activities. Even the perception that the economic activities of one nation could create climatic changes detrimental to another has the potential for disrupting international relations, as is already occurring in the case of acid rain. While the details are still difficult to establish, there is considerable scientific consensus that regional climatic changes of environmental significance are likely to occur over the next few generations. In essence, the environmental changes induced by CO2 and other greenhouse gases create what might be termed a problem of “redistributive justice.”
The last stage in dealing with the greenhouse effect concerns the matter of appropriate policy responses. Three classes of action could be considered. The first is mitigation through purposeful intervention to minimize the potential effects on the environment. For example, dust might be deliberately spread in the stratosphere to reflect some sunlight and cool the climate as a countermeasure to inadvertent warming by a CO2 buildup. This solution suffers from the obvious flaw that, since there is uncertainty associated with predicting the inadvertent consequences of human activities, substantial uncertainty must surround any deliberate attempts at climatic modification.
It may be that existing computer models overestimate the inadvertent changes or underestimate the effects of proposed modification schemes; if so, human intervention would be the proverbial “cure worse than the disease.” In any event, the prospect for international tensions is so staggering, and society’s legal instruments for dealing with the problem so immature, that it is hard to imagine any substantial mitigation strategies being adopted in the foreseeable future.
The second is simply adaptation. Adaptive strategies, favoured by many economists, would simply allow society to adjust to environmental changes without attempting to mitigate or prevent the changes in advance. It would be possible to accommodate climate change, for instance, by planting alternative crop strains that would be more widely adapted to a whole range of plausible climatic conditions anticipated for the future. Such adaptive strategies are often recommended because of the uncertain nature of the redistributive character of future climatic change.
The third type of policy response, namely prevention, is the most active. It might involve discontinuing the use of chlorofluorocarbons and other potential ozone-reducing gases, or a reduction in the amount of fossil fuel used around the world. These policies, often advocated by environmentalists, are controversial because in some cases they require substantial immediate investment as a hedge against large future environmental change, change that cannot be predicted precisely. What can be considered practical options are increasing the efficiency of energy end use (in a word, conservation), the development of alternative energy systems that are not based on fossil fuels, or, more radically, establishing a “law of the air.”
Such a measure was proposed in 1976 by two American scientists, the anthropologist Margaret Mead and the climatologist William W. Kellogg. To curtail CO2 emissions, they suggested setting up a global standard and assigning various nations the right to generate certain levels of the gas. The “appropriate” policy response will depend not only on scientific information about the probabilities and consequences of physical, biologic, and social impact scenarios but also on the value judgments of individuals, groups, corporations, and nations as to how to deal with the potential distribution of gains and losses implied by the buildup of carbon dioxide and other trace gases.
There is no scientific answer as to how society should act and no scientific basis for any particular policy choice. All science can do is provide scenarios and assess the probabilities and consequences of various plausible alternatives. The public and government leaders need to understand that decisions have to be made in the face of scientific uncertainty by optimizing clearly stated sets of often conflicting values.
Atmospheric problems are fundamentally global in both cause and effect. Moreover, they are inextricably interwoven with the overall problem of global economic development and cannot be removed from the discussion of population, resources, environment, and economic justice. Rich nations cannot ask poor nations to abandon their development plans simply because of the potential CO2 problem. Any global strategies for preventing CO2 buildup will require cooperation between rich and poor nations on the transfer of knowledge, technology, and capital.
A further point of contention in the dialogue between developed and developing countries will be the question of population-growth rates. This is quite relevant to the CO2 issue simply because total emission is the per-capita emission rate times the population size. If there is a movement in the future toward parity between rich and poor in per-capita use of fossil fuels, then population growth (which is occurring predominantly in Third World countries) will become as important a factor in the long-term CO2/climate problem as is high per-capita use of fossil fuels today (largely a problem of developed nations). Furthermore, there is an ethical question associated with atmospheric problems: Do we have the right to commit future generations to unprecedented atmospheric perturbations without actively attempting to prevent or at least anticipate them? To be sure, there is much uncertainty. It is safe to say, however, that humankind is abusing the atmospheric environment faster than it understands it. Clearly, some of the uncertain consequences could be serious and even irreversible.
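The per-capita arithmetic in the paragraph above can be sketched directly (the population and emission figures below are hypothetical round numbers chosen only to show the reversal, not data from the article):

```python
def total_emissions(population, per_capita_rate):
    # Total emission is simply the per-capita emission rate times population.
    return population * per_capita_rate

rich = total_emissions(1.2e9, 3.0)  # hypothetical: smaller population, high rate
poor = total_emissions(4.5e9, 0.5)  # hypothetical: larger population, low rate
print(rich, poor)                   # today, the high per-capita rate dominates
print(total_emissions(4.5e9, 3.0))  # at per-capita parity, population dominates
```

Under parity, the larger population's total overtakes the smaller one's by the ratio of the populations, which is the point the text makes about long-term population growth.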
The largest climatic change to occur on Earth since the dinosaurs inhabited the planet some 100,000,000 years ago is the change of the seasons. Temperature differences between winter and summer are many tens of degrees, in some cases 10 times larger than those anticipated over the next century from an intensification of the greenhouse effect. Existing climatic models fare well in predicting this massively large seasonal change. Thus, it is not likely that climatic models are making a fundamental error in predicting future warming while at the same time reproducing the very large seasonal differences that they do. On the other hand, seasonal variations occur within a year, whereas the timescale for an increase of trace gases in the atmosphere is many decades.
In effect, seasonal simulation skill is a piece of circumstantial evidence and not final proof of model validity. Nevertheless, it certainly lends a great deal of credence to the basic theoretical claims behind the greenhouse effect. What other bits of supporting evidence can be brought to bear? The furnace-like conditions on Venus and the frigid air of Mars were noted above. When data on the atmospheric composition of these planets are fed to the computer models used to predict greenhouse warming on the Earth, corresponding changes are indeed produced—i.e., temperatures on Venus are hundreds of degrees warmer and those of Mars are hundreds of degrees colder.
Is this proof that the mathematical models are accurately predicting the terrestrial greenhouse effect? Again, the evidence is only circumstantial but strong. One other piece of circumstantial evidence can be cited—namely, the ability of present climatic models to reproduce the vastly different conditions on Earth during ancient times when ice stretched all the way from the Arctic to northern Europe and the mid-Atlantic region of the United States.
Finally, what has been happening to the Earth’s climate during the past 100 years? If one takes all of the reliable records of temperature readings and averages them for the world, one finds that the Earth has warmed up about 0.5° C over the past century. At the same time, it has been determined that the CO2 level is about 25 percent higher today than it was a century ago. Therefore, what has happened on Earth is broadly consistent with what the climatic models suggest should have been happening. The warming of the planet over the past century is yet another piece of circumstantial evidence, not conclusive proof in and of itself, but still important.
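As a consistency check of the kind just described, the standard logarithmic dependence of greenhouse warming on CO2 concentration (an assumption of this sketch, not something stated in the article) lets one translate the century's roughly 25 percent CO2 rise through the models' 1–5 °C-per-doubling range:

```python
import math

def warming_for_co2_ratio(sensitivity_per_doubling_c, co2_ratio):
    # Warming scales with the logarithm of the concentration ratio:
    # a 25 percent rise is log2(1.25) ~ 0.32 of a full doubling.
    return sensitivity_per_doubling_c * math.log2(co2_ratio)

low = warming_for_co2_ratio(1.0, 1.25)
high = warming_for_co2_ratio(5.0, 1.25)
print(low, high)  # ~0.32 C to ~1.6 C
```

The observed warming of about 0.5 °C falls inside that band, which is what "broadly consistent" means here: supportive, but far too wide a range to count as proof.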
