Reliable Replacement Warheads

Old Montreal

Since July 16th, 1945, the United States has been a nuclear power. The first American thermonuclear weapon was detonated in 1952. Over the span of the Cold War, tens of thousands of nuclear weapons were assembled and mounted inside artillery shells, torpedoes, submarine-launched missiles, cruise missiles, land-based ICBMs, and aircraft-dropped bombs. Now these weapons are starting to age, and a debate has emerged about what should be done with them.

Many of these weapons are highly complex. A standard submarine-launched missile carries a conical warhead. Inside is a uranium casing that serves to contain the initial blast until a maximum amount of fission has occurred. At the bottom of that casing is a ‘pit’ of plutonium at sub-critical density. Around it is a casing of brittle, toxic, neutron-reflecting beryllium. Inside the pit may be a cavity containing tritium and deuterium gas (in the case of a “boosted” primary). Around the beryllium sphere is a shell of high explosives designed to detonate with fantastic precision, crush the plutonium pit to supercritical density, and initiate the fission reaction.

This whole assembly exists to initiate fusion in the ‘secondary,’ located higher in the outer uranium casing. The material that undergoes fusion – usually lithium deuteride – is wrapped around another sphere of uranium and is, in turn, wrapped in more uranium. All this is to create the largest possible yield in a relatively small and light package. The small size and conical shape allow eight or more of these devices to be placed on a single missile and then independently targeted once that missile is at the height of its ascent.

The 2008 budget allocated $6.5 billion for the maintenance of the American nuclear stockpile: 9,900 assembled warheads, 5,700 of them operationally deployed. In addition, about 7,000 plutonium pits are stored at the Pantex Plant in Amarillo, Texas. As the weapons age, concerns are growing about their reliability. They all contain high explosives, toxic chemicals, and corrosive agents. While it is possible to replace many of the non-nuclear components with more stable variants, the newly assembled bombs could not legally be tested, potentially leaving military commanders in doubt about their usability.

That is, in essence, the core of the ongoing debate about the Reliable Replacement Warhead (RRW). The program would begin by refurbishing the 100-kiloton W76 warhead, which is already undergoing a less ambitious retrofit. The hope is that the program can produce weapons that are durable and cheaper to maintain, without requiring full-scale tests of the kind conducted in Nevada and the Marshall Islands during the Cold War. I won’t get into the details of the debate here; more than sufficient information exists online and in recent newspapers and magazines. What is less frequently considered are some aspects of international law relevant to nuclear weapons.

The whole program should remind people about an oft-forgotten element of the Nuclear Non-Proliferation Treaty. Everyone remembers the bit about signatories without nuclear weapons pledging not to acquire them. People forget that the treaty also obliges existing nuclear powers to reduce their arsenals as part of an overall progression towards de-nuclearization. Upgrading your nuclear arsenal to endure further decades of operational status is hardly consistent with this requirement. It also signals to other states that the United States continues to consider operationally deployed nuclear weapons an important part of its overall military strategy.

Individuals and organizations contemplating a sizable RRW program might also do well to re-read the Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons handed down by the International Court of Justice. While such legal considerations are relatively unlikely to affect whatever decisions are made in relation to the RRW, examining the status of the law can be a good way to reach decisions about the respective rights and obligations of states.

Index of climate posts

Fruit bar

For the last while, my aim on this blog has been both to entertain readers and to provide some discussion of all the important aspects of the climate change problem. To facilitate the latter aim, I have established an index of posts on major climate change issues. Registered users of my blog can help to update it. Alternatively, people can use comments here to suggest sections that should be added or other changes.

The index currently contains all posts since I arrived in Ottawa. I should soon expand it to cover the entire span for which this blog has existed.

Mechanism design theory

Window and shadows in Montreal

The 2001 Nobel Prize in Economics was awarded to George Akerlof, Michael Spence, and Joseph Stiglitz for their work on asymmetric information. One standard assumption in neoclassical economic models is that all participants in a transaction have ‘perfect information’ about the goods or services being exchanged. The field of information economics seeks to deepen such models, so that they can better reflect the kind of dynamics that exist in real markets.

Asymmetric information is a key factor in the functioning of real markets. When you buy a used car, the person at the lot probably knows more about it than you do. The salesperson knows more about used cars in general, may have spoken with the original seller, and may have investigated this specific car. Conversely, you know more about your health risks than your health insurer (provided you live somewhere where health insurance is private). You might know, for instance, that all your relatives die of heart attacks on their 35th birthdays and that you personally drink 3L of whisky per day.

This year’s Nobel Prize in Economics was awarded to Leonid Hurwicz, Eric S. Maskin, and Roger B. Myerson for their work on mechanism design theory. The basic purpose of the theory is to deal with problems like those of asymmetric information: take a situation where people would normally have an incentive to behave badly (lie, cheat, etc) and establish rules to make it no longer in their interest to do so. We might, for instance, require used car salespeople to provide some sort of guarantee, or we might allow health insurers to void the policies of individuals who lie about their health when premiums are being set.

Reading about mechanism design feels a bit like watching engineers try to create religious commandments. This section from the Wikipedia entry illustrates what I mean.

Mechanism designers commonly try to achieve the following basic outcomes: truthfulness, individual rationality, budget balance, and social welfare. However, it is impossible to guarantee optimal results for all four outcomes simultaneously in many situations.

While it does seem a bit counterintuitive to try to achieve these things through economic means, this approach is probably more durable than simply drilling axioms into people’s heads. That is especially true when the counterparty is some distant corporation; people who would never cheat someone standing right in front of them are much more willing to deceive or exploit such a distant and amorphous entity.
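The ‘truthfulness’ goal above can be made concrete with the classic second-price (Vickrey) auction, a textbook mechanism in which bidding your true valuation is a dominant strategy. This is a minimal illustrative sketch; the names and numbers are hypothetical, not drawn from the Wikipedia passage:

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid. Because what you pay is decoupled from what you
# bid, shading your bid can only cost you the item, never save you money.

def vickrey_auction(bids):
    """bids: dict of bidder -> bid. Returns (winner, price_paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

def utility(valuation, bids, bidder):
    """Payoff to `bidder`: valuation minus price if they win, else 0."""
    winner, price = vickrey_auction(bids)
    return valuation - price if winner == bidder else 0

# Alice truly values the item at 100, against rival bids of 90 and 70.
others = {"bob": 90, "carol": 70}
honest = utility(100, {**others, "alice": 100}, "alice")  # wins at 90 -> payoff 10
shaded = utility(100, {**others, "alice": 80}, "alice")   # loses -> payoff 0
```

Here lying (bidding 80) forfeits a positive payoff, while bidding truthfully never hurts: the rules, not the bidder’s conscience, do the work.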

Vermont’s regulatory victory

Well known as a progressive place, Vermont seems to have recently struck a notable blow in the fight to develop regulatory structures to address climate change. A heated court case had developed between car manufacturers and the state government about whether the latter could impose tough emission limits on cars and light trucks. William Sessions, a federal judge, found in favour of the state’s right to do so. You can read the entire judgment here: PDF, Google Cache.

Among the arguments brought forward by the automakers (and rejected by Sessions) were that the regulations were unconstitutional, impossible to meet with existing technology, economically disastrous, ineffective, and anti-consumer. The case also involved a reasonably complex jurisdictional issue regarding California’s special exemption, which allows it to set environmental policy more stringently than other states.

There do seem to be a suspicious number of cases where industries have followed this trajectory in relation to new regulations: saying that they are unnecessary, saying they would be financially ruinous, then quietly adapting to them with little fuss once they come into force. The phase-out of CFCs in response to the Montreal Protocol is an excellent example. This trend is explicitly recognized in the ruling:

Policy-makers have used the regulatory process to prompt automakers to develop and employ new, state-of-the-art technologies, more often than not over the industry’s objections. The introduction of catalytic converters in the 1970s is just one example. In each case the industry responded with technological advancements designed to meet the challenges…

On this issue, the automotive industry bears the burden of proving the regulations are beyond their ability to meet…

In light of the public statements of industry representatives, history of compliance with previous technological challenges, and the state of the record, the Court remains unconvinced automakers cannot meet the challenges of Vermont and California’s GHG regulations.

The fact that Chinese cars have to meet better emission standards than American ones strongly suggests that the objections of industry are bogus. Given the price inelasticity of demand for gasoline (people keep buying about the same amount when the price goes up), regulating fuel efficiency and emissions does seem like an efficient way to reduce GHG emissions in the transport sector.
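The inelasticity point can be stated more precisely: demand is price-inelastic when the percentage change in quantity demanded, divided by the percentage change in price, is less than one in magnitude. A small sketch using the midpoint (arc) formula; the gasoline figures are hypothetical, chosen only to illustrate the definition:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc price elasticity of demand: percent change in quantity
    per percent change in price, using midpoint bases."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2)
    pct_dp = (p1 - p0) / ((p0 + p1) / 2)
    return pct_dq / pct_dp

# Hypothetical: a 20% rise in gasoline prices cuts purchases by only ~5%.
e = arc_elasticity(q0=100, q1=95, p0=1.00, p1=1.20)
inelastic = abs(e) < 1  # quantity barely responds to price
```

When `inelastic` is true, price signals alone reduce fuel use only slowly, which is why direct efficiency and emissions standards can cut transport GHGs faster than fuel taxes of a plausible size.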

Unlocking the iPhone

There is a lot of huffing and puffing going on about people ‘hacking’ the iPhone. At the heart of the matter are the twin definitions of the verb ‘hack’ that are not always well recognized. Many people take ‘hacking’ to mean malicious invasion of electronic systems, for instance in order to steal credit card numbers. An older definition of the word is simply to tinker with technology. In this sense, a ‘hack’ might be a clever modification of a bicycle or a mobile phone.

Apple has been exploiting all the hype about the iPhone to make highly preferential deals with individual carriers. This has happened in the US and UK already, doubtless with more to follow. These arrangements seem to benefit Apple and the carriers, but I doubt very much that they benefit the consumer. It is as though Toyota built cars that could only be filled at Shell service stations, then tried to prosecute people who removed the restriction so the cars could be filled elsewhere. Just as people own their cars and should thus be free to modify them in ways that do not endanger others, people who own iPhones should be able to tinker with them. Likewise, just as the Toyota-Shell arrangement would be clear-cut collusion of the kind governmental competition authorities police, so too is the Apple-carrier arrangement.

See also: Forbidden features and If you can’t open it, you don’t own it.

Liability and computer security

One of the major points of intersection between law and economics is liability. By setting the rules about who can sue brake manufacturers, in what circumstances, and to what extent, lawmakers help to set the incentives for quality control within that industry. By establishing what constitutes negligence in different areas, the law tries to balance efficiency (encouraging cost-effective mitigation on the part of whoever can do it most cheaply) with equity.

I wonder whether this could be used, to some extent, to combat the botnets that have helped to make the internet such a dangerous place. In brief, a botnet consists of ordinary computers that have been taken over by a virus. While they appear unaltered to their users, they can be maliciously employed by remote control to send spam, attack websites, carry out illegal transactions, and so forth. There are millions of such computers, largely because so many unprotected PCs with incautious and ignorant users are constantly connected to broadband.

As it stands, there is some chance that an individual computer owner will face legal consequences if their machine is used maliciously in this way. It would be a lot more efficient to pass part of the responsibility to internet service providers. That is to say, Internet Service Providers (ISPs) whose networks transmit spam or viruses outwards could be sued by those harmed as a result. These firms have the staff, expertise, and network control. Given the right incentives, they could require users to run up-to-date antivirus software that they would provide. They could also screen incoming and outgoing network traffic for viruses and botnet control signals. They could, in short, become more like the IT department at an office. ISPs with such obligations would then lean on the makers of software and operating systems, forcing them to build more secure products.

As Bruce Schneier has repeatedly argued, hoping to educate users as a means of creating overall security is probably doomed. People don’t have the interest or the incentives to learn, and the technology and threats change too quickly. To do a better job of combating them, our strategies should change as well.

HCFC phaseout

While international negotiations on climate change don’t seem to be going anywhere at the moment, some further tightening has been agreed within the regime that combats substances that deplete the ozone layer (the Vienna Convention and Montreal Protocol). The parties have decided to speed up the elimination of hydrochlorofluorocarbons (HCFCs), which were permitted as temporary substitutes for the chlorofluorocarbons (CFCs) that destroy ozone most energetically.

The BBC reports that:

The US administration says the new deal will be twice as effective as the Kyoto Protocol in controlling greenhouse gas emissions.

This seems quite implausible to me. HFCs, PFCs, and SF6 collectively contribute about 1% of anthropogenic warming. As such, even their complete elimination would have a fairly limited effect. In addition, the Vienna Convention process always envisioned the elimination of HCFCs, so there is nothing substantially new about this announcement other than the timing. An agreement for eliminating HCFCs has been in place since 1992:

1996 – production freeze
2004 – 35% reduction
2010 – 65% reduction
2015 – 90% reduction
2020 – 99.5% reduction
2030 – elimination

While it does seem that this timeline isn’t being followed, it remains to be seen whether this new announcement will have any effect on that.

The Kyoto Protocol targets six different greenhouse gases, most importantly the carbon dioxide that constitutes 77% of anthropogenic climate change. If it had succeeded in reducing emissions among Annex I signatories by 5.2%, as planned, it would have been both a significant contribution and an important starting point.
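The implausibility of the ‘twice as effective as Kyoto’ claim can be checked with rough arithmetic using the figures above. This back-of-envelope sketch treats contributions to warming as additive shares, which is a simplification, and the Annex I share of global emissions is an assumed round number, not a figure from the BBC report:

```python
# Shares quoted in the post:
hfc_family_share = 0.01   # HFCs, PFCs, SF6: ~1% of anthropogenic warming,
                          # so total elimination removes at most ~1 point
kyoto_cut = 0.052         # Kyoto's planned reduction among Annex I parties
annex_i_share = 0.6       # assumed Annex I share of global emissions

kyoto_effect = annex_i_share * kyoto_cut  # ~3.1% of global emissions
substitute_effect = hfc_family_share      # generous upper bound: ~1%

# Even on these generous assumptions, the substitute gases cannot
# deliver twice Kyoto's planned effect.
claim_plausible = substitute_effect >= 2 * kyoto_effect
```

On these numbers `claim_plausible` comes out false: an upper bound of roughly one percentage point cannot double an effect of roughly three.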

None of this is to say that we shouldn’t welcome the HCFC phaseout. If nothing else, it should help with the recovery of the ozone layer. We just need to be cautious about accepting statements like the one quoted.

Oryx and Crake

Fire truck valves

Margaret Atwood’s novel, which was short-listed for the Booker Prize, portrays a future characterized by the massive expansion of human capabilities in genetic engineering and biotechnology. As such, it bears some resemblance to Neal Stephenson’s The Diamond Age, which ponders what massive advances in material science could do, and posits similar stratification by class. Of course, biotechnology is an area more likely to raise ethical hackles and engage with the intuitions people have about what constitutes the ethical use of science.

Atwood does her best to provoke many such thoughts: bringing up food ethics, corporate ethics, reproductive ethics, and survivor ethics (the last time period depicted is essentially post-apocalyptic). The degree to which the catastrophe is brought about by a combination of simple greed, logic limited by one’s own circumstances, and unintended consequences certainly has a plausible feel to it.

The book is well constructed and compelling, obviously the work of an experienced storyteller. From a technical angle, it is also more plausible than most science fiction: it is difficult to identify any element that humanity is unlikely ever to be capable of, should it so desire. That, of course, contributes to the chilling effect as the consequences of such actions unfold.

All in all, I don’t think the book has a straightforwardly anti-technological bent. It is more a cautionary tale about what can occur in the absence of moral consideration and concomitant regulation. Given how the regulation of biotechnology is such a contemporary issue (stem cells, hybrid embryos, genetic discrimination, etc), Atwood has written something that speaks to some of the more important ethical discussions occurring today.

I recommend the book without reservation, with the warning that readers may find themselves disturbed by how possible it all seems.

Hired guns

I heard a fair amount about mercenaries when I was at Oxford, but this is the most interesting thing to happen in relation to them in decades. The degree to which war has been privatized would probably shock Eisenhower.

What remains to be seen is the degree to which the United States will respect the sovereignty of the democratic government that the entire second Iraq war was meant to create.

Some respite for bluefins

As of today, the European Commission has banned the fishing of Bluefin Tuna (Thunnus thynnus) in the Mediterranean and Eastern Atlantic. Good for them, though it is a bit late. Stocks of this impressive and long-lived creature have already been decimated globally.

[Update: 21 September 2007] Jennifer Jacquet has more about this, over on Shifting Baselines.

[Update: 2 December 2007] Shifting Baselines has even more on this.