Dangerous anthropogenic interference

The stated objective of the United Nations Framework Convention on Climate Change is to achieve “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” The most problematic aspect of this mandate is the open definition of ‘dangerous anthropogenic interference.’ Given that we have direct ice core evidence that concentrations of carbon dioxide are higher than at any point in the past 650,000 years – along with indirect evidence that this is the peak for the last 20 million years – it is fair to say that we are already interfering dangerously with the climate system.

Of course, one cannot go straight from showing elevated CO2 to ascribing danger. That said, the link between greenhouse gases and increases in radiative forcing and temperature is incontrovertible. So too are the realities of ice cap and glacier melting and ocean acidification. The question is no longer whether we will cause dangerous interference, but how much danger we are willing to tolerate in exchange for less rapid and comprehensive changes to our high-carbon lifestyles.

Learning about lithosphere-atmosphere interactions from the cryosphere

The European Project for Ice Coring in Antarctica (EPICA) has recently announced results confirming that the long-term regulation of carbon dioxide in the atmosphere is largely a geological phenomenon. Carbon dioxide is naturally introduced into the atmosphere through volcanic activity and naturally removed through the weathering of rock and the deposition of carbon-laden rock in deep ocean sediments.

On the basis of evidence collected from a 3270 metre Antarctic ice core, the EPICA team determined that the atmospheric concentration of carbon dioxide underwent a long-term change of 22 parts per million over the 610,000 years before industrialization. This period covers five complete glacial-interglacial cycles. Since the Industrial Revolution, however, concentrations have risen by about 100 ppm – an overall rate 14,000 times higher.
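For anyone who wants to check the comparison, the arithmetic is simple. The 22 ppm and 610,000 year figures come from the EPICA result above; treating the industrial era as roughly 200 years is my own rounding.

```python
# Rough check of the rate comparison above. The 22 ppm / 610,000 year figures are
# from the EPICA result; the ~200 year industrial era is my own rounding.
preindustrial_rate = 22 / 610_000   # ppm per year, averaged over five glacial cycles
industrial_rate = 100 / 200         # ~100 ppm rise over roughly two centuries

print(f"Pre-industrial: {preindustrial_rate:.2e} ppm per year")
print(f"Industrial era: {industrial_rate:.2f} ppm per year")
print(f"Ratio: {industrial_rate / preindustrial_rate:,.0f} times faster")  # about 14,000
```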

Probably the most important thing to take from this is that the current behaviour of the global carbon system is likely to be different from that which has been dominant across geological time, simply because such a huge volume of carbon dioxide has been released through the burning of fossil fuels and deforestation.

Beetle-kill and carbon dioxide

Positive feedbacks are one of the most worrisome aspects of climate change. Vicious spirals could make the problem far more difficult to control and, if we wait too long to act, potentially impossible to deal with. A new article in Nature suggests that the pine beetle epidemic in British Columbia has turned the forests there into net carbon emitters:

In the team’s model, a pine forest untouched by beetles but with a normal amount of logging is a slight carbon sink, sucking up more carbon (as carbon dioxide) than it loses (either as carbon dioxide or as timber). The only exception to this is when forest fires convert the forest to a net source, as they did in 2003. The beetles have an even bigger effect — in their worst year releasing 50% more carbon than the 2003 fires — and act over longer time scales, with additional logging making things even worse.

According to Werner Kurz, Natural Resources Canada’s senior research scientist, the total emissions associated with the outbreak will be about 990 megatonnes by 2020 – about 1.5 years worth of total Canadian emissions at present levels.
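That comparison implies an annual national total of about 660 megatonnes, which is just the quoted figures divided through (no outside data involved):

```python
# Implied annual Canadian emissions behind the '1.5 years' comparison above.
outbreak_total_mt = 990        # megatonnes of CO2 by 2020, per Kurz
years_equivalent = 1.5
print(f"{outbreak_total_mt / years_equivalent:.0f} Mt per year")  # about 660 Mt
```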

Eventually, the pine beetles will find themselves in the position of having nothing left to eat and the epidemic will taper off. What is nevertheless suggested by this situation is the possibility that climate change can lead to degraded ecosystems which hold less carbon dioxide, thus further contributing to climate change.

Romm’s fourteen wedges

Red spraypaint

Joseph Romm, whose book I reviewed previously, has a new blog post up outlining what would be necessary to stabilize global concentrations of greenhouse gases below 450 parts per million of CO2 equivalent. It is explained in terms of ‘stabilization wedges’ – each of which represents an emissions reduction of one gigatonne (one billion tonnes) of carbon per year, relative to business-as-usual projections. In total, he says 14 wedges are necessary by 2050 and suggests the following list:

  1. One wedge of vehicle efficiency — all cars getting 60 mpg, with no increase in miles traveled per vehicle.
  2. One of wind for power — one million large (2 MW peak) wind turbines.
  3. One of wind for vehicles — another 2000 GW wind. Most cars must be plug-in hybrids or pure electric vehicles.
  4. Three of concentrated solar thermal — about 5000 GW peak.
  5. Three of efficiency — one each for buildings, industry, and cogeneration/heat-recovery for a total of 15 to 20 million GWh.
  6. One of coal with carbon capture and storage — 800 GW of coal with CCS.
  7. One of nuclear power — 700 GW plus 10 Yucca Mountains for storage.
  8. One of solar photovoltaics — 2000 GW peak (or less PV and some geothermal, tidal, and ocean thermal).
  9. One of cellulosic biofuels — using one-sixth of the world’s cropland (or less land if yields significantly increase or algae-to-biofuels proves commercial at large scale).
  10. Two of forestry — End all tropical deforestation. Plant new trees over an area the size of the continental U.S.
  11. One of soils — Apply no-till farming to all existing croplands.

No government anywhere has this level of ambition today. Just providing the nuclear wedge would require building 26 new plants a year, as well as ten geological repositories the size of Yucca Mountain. Providing the carbon capture wedge would require building infrastructure capable of injecting a volume of CO2 underground comparable to the volume of oil we currently extract.
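For what it is worth, the 26-a-year figure can be roughly reproduced with a back-of-the-envelope calculation. The assumptions here (1 GW per plant, build-out over about 42 years, and an existing world fleet of roughly 370 GW that retires and must be replaced over the same period) are mine, not necessarily Romm's:

```python
# Back-of-the-envelope sketch of the nuclear build rate. The 1 GW plant size and
# ~370 GW existing fleet are my assumptions, not figures taken from Romm's post.
wedge_gw = 700                # new capacity for one wedge
years = 42                    # roughly now until 2050
existing_fleet_gw = 370       # approximate current world nuclear capacity (assumed)

new_per_year = wedge_gw / years                   # ~17 one-gigawatt plants a year
replacement_per_year = existing_fleet_gw / years  # ~9 a year just to replace retirements
print(f"{new_per_year + replacement_per_year:.0f} plants per year")  # roughly 25-26
```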

Romm does an excellent job of showing what a huge, civilizational challenge climate change really is. At the same time, while there is no technical reason why deploying fourteen wedges is impossible, one certainly doesn’t have the sense that anything like the necessary level of political will exists today. President Bush’s ludicrous announcement that the US will try to stop emissions growth by 2025 is closer to the mainstream of thinking in most places. At least a few people would rather doom future generations to an inhospitable planet than buckle down and make these changes.

Once again, we are left with the question of what might convince people to change. If fourteen wedges are what’s required, it seems virtually impossible that the rosy ‘it will all pay for itself’ scenario will play out. It is hard to imagine anything short of a catastrophe providing the necessary motive force, and even then it would need to be a catastrophe that unites the world in common effort, rather than dividing it in fear and suspicion.

In short, the situation does not leave a person feeling optimistic.

The Black Swan

Dirty machinery

Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable is an unusual, excellent book with broad applicability. In particular, those concerned with finance or the use of mathematics in social disciplines (politics, economics, international relations, etc.) should strongly consider reading it. They will probably find it uncomfortable – as it demonstrates how their ‘rigorous’ disciplines are built on sand – but they will be wiser people if they can accept that.

Taleb’s main point is that life is dominated by improbable events of huge consequence. This is obscured from us for a number of reasons: not least because we are able to look back and construct plausible after-the-fact stories about why things turned out the way they did. Because we fail to appreciate how explosively improbable the world is, we leave ourselves far more vulnerable than our predictions suggest. Indeed, the biggest thing Taleb attacks is the very notion that we can make good predictions about the future. ‘Black Swans’ are those improbable events of massive consequence which we are able to rationalize after the fact, though we could not have predicted them beforehand. They can be negative (the sudden collapse of a bank) or positive (the amazing success of an obscure book). They relate to the way in which the world is skewed towards extremes when it comes to things like income or the importance of a publication.

Taleb’s book consists of an odd combination of anecdote, mathematics, scholarly and literary references, personal history, and diatribes. Throughout, one has the impression of engaging in conversation with an unusually fascinating fellow – albeit one who takes special pleasure in cutting down those who disagree with him (the text misses no opportunity to mock and insult economists and financial analysts, in particular).

The lessons Taleb says one should draw from an appreciation of Black Swans are noteworthy and sensible. First, we should maximize our chances of getting lucky and finding a positive Black Swan. In investment terms, that means making lots of small bets on long shots that might really pay off. In life more generally, it basically means trying new things – visiting the restaurant you never normally would, going on the blind date, seizing the opportunity to meet with the big shot publisher to explain your book idea. Second, we should minimize our exposure to negative Black Swans that can wipe us out. That means definitely avoiding standard financial instruments like mutual funds, distrusting any risk assessment based on the bell curve, and appreciating that blue-chip stocks might collapse despite decades of steady growth. His overall financial prescription is to put whatever you are unwilling to lose in US government bonds, while using the rest to make long-shot speculative bets.
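To make the ‘barbell’ idea concrete, here is a toy sketch with entirely invented numbers (the 90/10 split, the bond yield, and the long-shot odds are all illustrative, and none of this is investment advice):

```python
import random

# Toy model of a 'barbell' portfolio: most capital in something very safe, the
# remainder spread across many small long-shot bets. All numbers are invented.
random.seed(1)

def barbell_year(capital, safe_fraction=0.9, bond_yield=0.04,
                 n_bets=20, win_prob=0.02, win_multiple=30):
    safe = capital * safe_fraction * (1 + bond_yield)
    bet_size = capital * (1 - safe_fraction) / n_bets
    payoff = sum(bet_size * win_multiple
                 for _ in range(n_bets) if random.random() < win_prob)
    # Worst case, the speculative slice is lost entirely; the safe slice remains.
    return safe + payoff

print(f"End-of-year value: {barbell_year(100_000):,.0f}")
```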

It would be very interesting to see Taleb’s ideas applied directly to International Relations (the capital letters mean ‘IR the discipline’ rather than IR the phenomenon) or climate change. Within IR, there are a few dissenters who appreciate just how inappropriate all the statistics and quantitative methods being trotted out really are. They would find Taleb’s book to be confidence-boosting, whereas the number-obsessed IR scholars concentrated in the United States would probably respond to it with as much anger as hedge fund managers.

When it comes to climate change, the Black Swan idea seems relevant in several ways. First, it creates a healthy scepticism about projections: whether they are for economic growth, greenhouse gas emission levels, or greenhouse gas reductions associated with certain policies. Second, it reveals how fallacious it is to say: “Humanity has muddled through so far, therefore we can handle climate change just like any previous crisis.” Third, it sheds light on scenario planning in the face of possible disastrous outcomes with unknown probabilities attached.

It is safe to say that anybody interested in how history is written or how people try to come to grips with an uncertain future will find something of value in this text. At the very least, the colourful asides provide plenty of mental fodder. At the very most, appreciation for Black Swans might significantly alter how you live your life.

Experimenting on model brains

Milan and Paul in a diner

While taking the bus back from Toronto last night, I found myself wondering again about the brain-in-a-computer issue. While there are legitimate doubts about whether it would ever actually be possible to build a model akin to a human brain inside a machine, it is already the case that people are building successively better (but still very poor) approximations. Eventually, the models may become good enough for the following ethical question to arise.

What I wondered about, in particular, was the ethics of experimenting on such a thing. I have heard people mention, from time to time, the possibility of a ‘grandmother neuron’ charged specifically with recognizing your grandmother. The idea seems very unlikely, given that neurons die with regularity and people rarely lose the ability to recognize their grandmothers and nothing else. That being said, there is plenty of experimental evidence that brain injuries can produce strangely specific effects. As a consequence, the unfortunate, brain-damaged victims of car crashes sometimes find themselves the focus of intense interest among cognitive and behavioural psychologists.

If we did have a model brain (say, a semi-realistic model fly or beetle brain), we could experiment by disabling sections of it and observing the effects. By extension, the same could be done with model rat, monkey, or human brains. The question then becomes: is there an ethical difference between experimenting on a mathematical model that behaves like a human brain and experimenting on a real human brain? Does that distinction lie in the degree to which the model is comprehensive and accurate? For instance, a good model brain might respond with terror and confusion if experimented upon.

This is yet another way of getting at the whole ethical question of whether people are simply their material selves, or whether there is something metaphysical to them. I maintain extremely strong doubts about the latter possibility, but still feel that there is an ethical distinction between experimenting on crude or partial brain models and experimenting on complete ones or real brains. I am much less sure about whether there is a meaningful ethical distinction between the last two options.

Frogs don’t let themselves get boiled

Apparently, the oft-repeated ‘fact’ that a frog placed in a pot of slowly-warming water will eventually let itself be cooked is entirely false. A frog dropped straight into boiling water will probably be horribly injured before it can get out (if it ever manages to); one put into a slowly warming pot will leave when the water gets uncomfortable. So says Professor Doug Melton, of the Harvard University Biology Department, among others.

It is easy to understand why this ‘fact’ has become so commonly cited: it seems like a pat little bit of wisdom from the animal world. Its falsehood provides a more important lesson about verifying whether convenient-seeming stories are actually correct, even when they seem useful for livening up your argument.

The seductiveness of the bell curve

Cat vandalism

Among the statistically inclined, there are few more elegant bits of mathematics than the bell curve or ‘normal’ distribution. At the centre, you have the most predictable outcome for any variable: say, the amount of food you eat on the average day. Higher and lower numbers close to the mean are still quite probable, but each possibility gets less and less likely as you move farther out. While you probably vary your food intake by hundreds of grams a day, it is rarer to vary by kilograms and quite rare to vary by tens of kilograms.

The reason the bell curve in particular is so charming is that it gives us the opportunity to assign probabilities to things. For instance, we can take the mean weight of airplane passengers and the standard deviation in the population (a measure of how much variation there is), and come up with a statement like: “99.9% of the time, this plane will be able to seat 400 people and have sufficient power to take off.”
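Here is roughly what such a calculation looks like in practice. Every number is invented for illustration (an 80 kg mean passenger weight, a 15 kg standard deviation, and a 33,000 kg passenger load limit):

```python
from math import sqrt
from statistics import NormalDist

# Illustrative bell curve calculation: probability that 400 passengers fit within
# an assumed load limit. All figures are made up for the sake of the example.
mean_kg, sd_kg, n_passengers, capacity_kg = 80, 15, 400, 33_000

# The sum of 400 independent, normally distributed weights is itself normal.
total = NormalDist(mu=mean_kg * n_passengers, sigma=sd_kg * sqrt(n_passengers))
print(f"P(total weight within limit): {total.cdf(capacity_kg):.3%}")  # about 99.96%
```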

That being said, there are big problems with assuming that things are like bell curves. For one, they might not be ‘unimodal.’ We can imagine a bell curve as being like a mountain of probability, where the peak is the mean and the slopes on either side represent less probable outcomes. Some distribution ‘mountains’ have more than one peak, however. A distribution of the heights of humans, for instance, has a male and female peak. If we took the male peak as the mean and tried to predict heights based on the standard deviation for the whole sample, we would find that there are a lot of unexpectedly short people in the sample (women).
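A quick simulation shows how badly that goes wrong. The particular means and spreads below are invented, but the structure matches the example above: two peaks, one naive single-peak prediction.

```python
import random
from statistics import NormalDist, stdev

# A 50/50 mixture of two height peaks (162 cm and 176 cm, both with a 7 cm spread).
# The numbers are invented; only the two-peaked structure matters here.
random.seed(0)
heights = [random.gauss(162, 7) if random.random() < 0.5 else random.gauss(176, 7)
           for _ in range(100_000)]

# Take the 'male' peak as the mean, but the spread of the whole mixed sample.
naive = NormalDist(mu=176, sigma=stdev(heights))
threshold = 155
predicted = naive.cdf(threshold)
observed = sum(h < threshold for h in heights) / len(heights)
print(f"Predicted below {threshold} cm: {predicted:.1%}; observed: {observed:.1%}")
```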

Another big problem is that the distribution might not be symmetrical. Consider something like the amount of money earned in an hour by a reckless gambler or stockbroker. On one side of the average are all the below-average hours, which are probably many. On the other side, the slope may taper off much more gradually: in a few extremely lucky hours, the gambler might earn dramatically more than the norm, in a way not mirrored in the shape of the distribution on the other side. Assuming that the distribution is like a bell curve will make us assign far too low a probability to these outcomes.
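The same kind of simulation makes the point about skew. Here the ‘hourly winnings’ are drawn from a lognormal distribution, an invented stand-in for any series with many modest values and a few spectacular ones:

```python
import random
from statistics import NormalDist, mean, stdev

# Invented right-skewed 'winnings': most hours are modest, a few are spectacular.
random.seed(0)
winnings = [random.lognormvariate(0, 1) for _ in range(100_000)]

fitted = NormalDist(mean(winnings), stdev(winnings))   # pretend it is a bell curve
threshold = mean(winnings) + 3 * stdev(winnings)       # a 'three sigma' lucky hour

predicted = 1 - fitted.cdf(threshold)
observed = sum(w > threshold for w in winnings) / len(winnings)
print(f"Bell curve prediction: {predicted:.3%}; skewed data: {observed:.3%}")
```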

The last problem I am going to talk about now is a venerable one, commonly associated with Bertrand Russell. Imagine you see a trend line that jitters around a bit, but always moves upwards. Asked what is likely to happen next, you would probably suggest a jump comparable to the mean increase between past intervals. Too bad the data series is grams of food being eaten by a turkey per day, and tomorrow is Thanksgiving. You might have a beautiful bell curve showing the mean food consumed by the turkey per day, but it might all fall apart because something that undergirded the distribution changed. Those whose pensions were heavily based on Enron stock have an acute understanding of this.

When their use is justified, bell curves are exceptionally useful. At the same time, using them in inappropriate circumstances is terrifically dangerous. Just because a stock market fall of X points is five standard deviations away from the mean does not mean it will only happen 0.00005733% of the time, despite what bell curve equations and relatively soft-headed statistics instructors might tell you.
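That percentage appears to be the standard two-tailed bell curve figure for a five standard deviation event, and it is easy to reproduce:

```python
from statistics import NormalDist

# Probability of landing at least five standard deviations from the mean, in either
# direction, under a perfect bell curve.
p = 2 * (1 - NormalDist().cdf(5))
print(f"{p:.7%}")   # about 0.0000573%, or roughly once in 1.7 million trials
```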

Odds guessing experiment

One of the subtle pleasures associated with reading this blog is the occasional opportunity to be experimented upon. Today is such a day.

Instructions:

  1. Read all these instructions before actually completing step two.
  2. Flip a coin.
  3. Please actually flip a coin. People who choose ‘randomly’ in their heads do not actually pick heads and tails equally. If you don’t have a coin, use this online tool.
  4. If it landed heads, click here.
  5. If it landed tails, click here.
  6. When you click one of the links above, you will see a description of an event.
  7. Before looking at the comments below, estimate the probability of the event you see described happening in the next year.
  8. Write that as a comment, indicating whether you are answering the heads question or the tails question.

When you are done, you are naturally free to read the other question and the comments left by others.

Even if you don’t normally comment, please do so in this case. I want to get enough responses to permit a statistical comparison.

Choosing nuclear

Nuclear flowchart

The flowchart above illustrates one process through which we could collectively evaluate the desirability of nuclear power, given the potential risks and benefits associated with the technology. In my personal opinion, the answer to the first question is probably “yes,” though perhaps not to as large a degree as commonly believed. The second and third questions are much more up in the air, and necessarily involve uncertainty. We cannot know exactly what will be involved in building a massive new nuclear architecture before it is done; similarly, it cannot be known with certainty what would result from choosing conservation and renewables instead.

As for the third question, it involves major issues of risk evaluation and risk tolerance. If the world keeps running nuclear plants, it is a statistical certainty that we will eventually have another serious nuclear accident. No nuclear state is without its contaminated sites, and none yet has a geological repository for wastes.

This post definitely isn’t meant to settle the question initially posed, but rather to clarify thinking on the issue and to dismiss the automatic logical leap from “climate change is happening” to “build more fission plants.”