Mosul Dam

The Mosul Dam is one element of Iraq’s infrastructure that has survived the war so far, but it is apparently seriously threatened. Because it was built on gypsum, which dissolves in water, it threatens to fail catastrophically as the result of small initial problems. A report from the US Army Corps of Engineers warned that the dam’s failure would drown Mosul under nearly 20m of water and parts of Baghdad under 4.5m. The 2006 report explained that:

In terms of internal erosion potential of the foundation, Mosul Dam is the most dangerous dam in the world. If a small problem [at] Mosul Dam occurs, failure is likely.

According to the BBC, the US Special Inspector General for Iraq Reconstruction (SIGIR) has stated that the dam’s foundations could give way at any moment. The report from the Corps of Engineers states that the dam’s failure could cause 500,000 civilian deaths. General David Petraeus and the American Ambassador to Iraq have both written to the Iraqi government expressing their severe concern.

The dam is 2,100m across and contains 12 billion cubic metres of water. It generates about 320 MW of electricity. Previous attempts at addressing the gypsum issue seem to have been botched. According to the Washington Post, “little of the reconstruction effort led by the U.S. Embassy has succeeded in improving the dam.” Stuart Bowen, the special inspector general reviewing the efforts, has said that “[t]he expenditures of the money have yielded no benefit yet.”

Today, the Iraqi government officially stated that concerns about a possible collapse are misplaced and that the dam is constantly monitored. Ongoing actions include reducing the amount of water in the reservoir and pumping grout (a liquefied mixture of cement and other additives) into the foundation. Work is meant to begin next year on wrapping the foundations in concrete to make them more secure.

Obviously, a catastrophic dam collapse is the last thing Iraq needs. Hopefully, the dam will hold until a sensible refit can be carried out, and no wayward coalition munitions or insurgent bombs will help it towards disintegration.

Geoengineering: wise to have a fallback option

Sailing ship graffiti

Over at RealClimate they are talking about geoengineering: that is, the intentional manipulation of the global climatic system to counteract the effects of greenhouse gases. Generally, it consists of efforts to either reflect more solar energy back into space or enhance the activity of biological carbon sinks. It has been mentioned here before.

The fundamental problem with all geoengineering schemes (from sulfate injections to plankton tubes to giant mirrors) is that they risk creating unexpected and negative side-effects. That said, it does seem intelligent to investigate them as a last resort. Nobody knows at what point critical physical and biological systems might tip into a cycle of self-reinforcing warming. Plausible examples include permafrost melting in the Arctic, releasing methane that heats the atmosphere still more, or the large-scale burning of tropical rainforests, both producing emissions and reducing the capacity of carbon sinks. If physical or biological systems became net emitters of greenhouse gases, cutting human emissions to zero would not be sufficient to stop warming; it would simply continue until the planet reached a new equilibrium.

Given linear projections of climate change damages, we would probably be wisest to heed the Stern Review and spend adequately on mitigation. Given the danger of strong positive feedbacks, it makes sense to develop some fallback options for use in desperate times. It seems to me that various forms of geoengineering should be among them. Let us hope they never need to be used.

‘Enduring Freedom’ and Afghanistan

Montreal graffiti

Last night, I got into a brief conversation about the Taliban. It reminded me of a statement quoted at a Strategic Studies Group meeting I attended in Oxford:

People are being very careful not to be against the Taliban and ‘keep the balance’ so that they will not be punished for helping foreigners when the Taliban return.

-Police commander, Kandahar

This idea raises an important question about longevity. If the Taliban can outlast any deployment NATO will be able to maintain, it becomes essential to produce a government that will be able to hold its own against them in the long term. Otherwise, we are just delaying the transition back to Taliban rule. While I am definitely not an expert on the military or political situation in Afghanistan, it does not seem like the present Karzai government has that kind of capability in the absence of direct military support from NATO.

The question thus becomes what, if anything, NATO can do to produce a (preferably democratic) Afghan government capable of enduring after their withdrawal. If that does not prove possible, the question becomes what we are hoping to achieve in Afghanistan, and whether any lasting good will result for the population from the initial displacement of the Taliban and the Al Qaeda elements they were supporting.

Securing against the wrong risk

This week’s Economist includes an unusually poor article on security. It explains that the upcoming Swiss election will be using quantum cryptography to transmit the results from polling stations to central tabulation centres. It alleges that this makes the whole electoral process more secure. This is wrong.

What this is essentially saying is that there would otherwise be a risk of manipulation of this data in transit. The chief polling officer at one station might send a set of figures that get altered by a malicious agent en route to the tabulation centre. Having an encrypted link prevents this man-in-the-middle attack. It does not prevent the polling officer from lying, or the person at the tabulation centre from manipulating the results they input into the counting machines. It doesn’t prevent ballot-stuffing, vote buying, or the compromise of computer systems used to collect or tally votes. In short, it provides no security for the parts of the electoral process that are actually vulnerable to attack. In the absence of good security at the more vulnerable points in the electoral process, using quantum cryptography is like putting a padlock on a paper bag.

Hopefully, they will print my brief letter taking them to task for allowing themselves to be seduced by technology, rather than thinking sensibly about security.

[Update: 29 October 2007] Bruce Schneier has written about this. Unsurprisingly, he agrees that using quantum cryptography does not increase the security of the Swiss election.

Unicity distance

Sky, moon, and wires

In order to be able to decipher a secret message through cryptanalysis, you need to have a sufficient quantity of data to evaluate whether it has been done properly. If all a cryptanalyst has to work with is enciphered text (say, in the form of an intercepted message) the attempt to decipher it is called a ciphertext-only attack. For a variety of reasons, these are very tricky things to accomplish. The element described below is one of the most basic.

In order to understand why a message of sufficient length is important, consider a message that consists only of a single enciphered phone number: “724-826-5363.” These numbers could have been modified in any of a great number of ways: for instance, adding or subtracting a certain amount from each digit (or alternating between adding and subtracting). Without knowing more, or being willing to test lots of candidate phone numbers, we have no way of learning whether we have deciphered the message properly. On the basis of the ciphertext alone, 835-937-6474 is just as plausible as 502-604-3141.
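The ambiguity can be illustrated with a toy digit-shift cipher, a minimal sketch of the scheme described above (the function name and key choices here are illustrative, not from the original example):

```python
def shift_digits(number: str, key: int) -> str:
    """Add a fixed key to every digit, mod 10; non-digits pass through."""
    return "".join(str((int(c) + key) % 10) if c.isdigit() else c
                   for c in number)

ciphertext = "724-826-5363"

# Every candidate key yields a syntactically valid phone number, so the
# ciphertext alone gives us no way to pick the correct decryption.
candidates = [shift_digits(ciphertext, (10 - k) % 10) for k in range(10)]
for k, plaintext in enumerate(candidates):
    print(f"key {k}: {plaintext}")
```

Note that the two “plausible” plaintexts mentioned above are simply the ciphertext shifted by different amounts, which is exactly why they cannot be distinguished.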

Obviously, this is only a significant problem for short messages. One could imagine ways in which BHJG could mean ‘HIDE’ or ‘TREE’ or ‘TRAP.’ The use of different keys with the same algorithm could generate any four-letter word from that ciphertext. Once we have a long enough enciphered message, however, it becomes a lot more obvious when we have deciphered it properly. If I know that the ciphertext:

UUEBJQPWZAYIVMNAZSUQPYJVOMDGZIQHWZCX

has been produced using the Vigenere cipher, and I find that it deciphers to:

IAMTHEVERYMODELOFAMODERNMAJORGENERAL

when I use the keyword MUSIC, it is highly likely that I have found both the key and the unenciphered text.
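The Vigenère operation above can be sketched in a few lines; this is a standard implementation of the cipher, shown here to let readers check the worked example for themselves:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of an uppercase A-Z message by the matching key letter."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        k = ord(key[i % len(key)]) - ord("A")  # key repeats over the message
        out.append(chr((ord(ch) - ord("A") + sign * k) % 26 + ord("A")))
    return "".join(out)

ciphertext = "UUEBJQPWZAYIVMNAZSUQPYJVOMDGZIQHWZCX"
print(vigenere(ciphertext, "MUSIC", decrypt=True))
# prints IAMTHEVERYMODELOFAMODERNMAJORGENERAL
```

Trying any other five-letter keyword produces gibberish, which is exactly the point: at this length, only the correct key yields readable English.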

This concept is formalized in the idea of unicity distance, invented by Claude Shannon in the 1940s. Unicity distance describes the amount of ciphertext that we must have in order to be confident that we have found the right plaintext. This is a function of two things: the entropy of the plaintext message (something written in proper English is far less random than a phone number) and the length of the key being used for encryption.

To calculate the unicity distance for a message written in English, divide the length of the key in bits (say, 128 bits) by 6.8 (which is a measure of the level of redundancy in English). With about nineteen characters of ciphertext, we can be confident that we have found the correct message and not simply one of a number of possibilities, as in the phone number example. By definition, compressed files have redundancy removed; as such, you may want to divide the key length by about 2.5 to get their unicity distance. For truly random data, the level of redundancy is zero; therefore, the unicity distance is infinite. If I encipher a random number and send it to you, a person who intercepts it will never be able to determine – on the basis of the ciphertext alone – whether they have deciphered it properly.

For many types of data files, the unicity distance is comparable to that in normal English text. This holds for word processor files, spreadsheets, and many databases. Actually, many types of computer files have significantly smaller unicity distances because they have standardized beginnings. If I know that a file sent each morning begins with: “The following is the weather report for…” I can determine very quickly if I have deciphered it correctly.

Actually, the last example is particularly noteworthy. When cryptanalysts are presented with a piece of ciphertext using a known cipher (say Enigma) and which is known to include a particular string of text (such as the weather report introduction), it can become enormously easier to determine the encryption key being used. These bits of probable text are called ‘cribs’ and they played an important role in Allied codebreaking efforts during the Second World War. The use of the German word ‘wetter’ at the same point in messages sent at the same time each day was quite useful for determining what that day’s key was.
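Against a simple cipher like the Vigenère, a crib can expose the key outright: subtracting the known plaintext from the matching ciphertext leaves the repeating keyword behind. A sketch, reusing the Vigenère example from this post (Enigma itself, of course, required far more elaborate machinery):

```python
def recover_key(ciphertext: str, crib: str) -> str:
    """Subtract a known plaintext crib from the ciphertext, revealing key letters."""
    return "".join(chr((ord(c) - ord(p)) % 26 + ord("A"))
                   for c, p in zip(ciphertext, crib))

# We suspect the message begins "IAMTHEVERY..." -- subtracting it from the
# ciphertext makes the repeating keyword visible.
fragment = recover_key("UUEBJQPWZA", "IAMTHEVERY")
print(fragment)  # prints MUSICMUSIC
```

Once the keyword is visible, the rest of the message falls immediately, which is why known or guessable text at a fixed position is so dangerous for a cipher system.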

Secrets and Lies

Ottawa church

Computer security is an arcane and difficult subject, constantly shifting in response to societal and technological forces. A layperson hoping to get a better grip on the fundamental issues involved can scarcely do better than to read Bruce Schneier’s Secrets and Lies: Digital Security in a Networked World. The book is at the middle of the spectrum of his work, with Beyond Fear existing at one end as a general primer on all security related matters and Applied Cryptography providing far more detail than non-experts will ever wish to absorb.

Secrets and Lies takes a systematic approach, describing types of attacks and adversaries, stressing how security is a process rather than a product, and explaining a great many offensive and defensive strategies in accessible ways and with telling examples. Schneier stresses the impossibility of preventing all attacks, and hence the importance of maintaining detection and response capabilities. He also demonstrates strong awareness of how security products and procedures interact with the psychology of system designers, attackers, and ordinary users. Most surprisingly, the book is consistently engaging and even entertaining. You would not expect a book on computer security to be so lively.

One critical argument Schneier makes is that the overall security of computing can only increase substantially if vendors become liable for security flaws in their products. When a bridge collapses, the construction and engineering firms end up in court. When a ten-year-old bug in Windows NT causes millions of dollars in losses for a company using it, Microsoft may see fit to finally issue a patch. Using regulation to structure incentives to shape behaviour is an approach that works in a huge number of areas. Schneier shows how it can be made to work in computer security.

Average users probably won’t want to read this book – though elements of it would probably entertain and surprise them. Those with an interest in security, whether it is principally in relation to computers or not, should read it mostly because of the quality of Schneier’s thought processes and analysis. The bits about technology are quite secondary and pretty easily skimmed. Most people don’t need to know precisely how smart cards or the Windows NT kernel are vulnerable; they need to know what those vulnerabilities mean in the context of how those technologies are used. Reading this book will leave you wiser in relation to an area of ever-growing importance. Those with no special interest in computers are still strongly encouraged to read Beyond Fear: especially if they are legislators working on anti-terrorism laws.

On technology and vulnerability

The first episode of James Burke’s Connections is very thought provoking. It demonstrates the inescapable downside of Adam Smith’s pin factory: while an assembly line can produce far more pins than individual artisans, each of the assembly line workers becomes unable to produce anything without the industrial network that supports their work.

See this prior entry on Burke’s series.

Protecting sources and methods

Rusty metal wall

By now, most people will have read about the Canadian pedophile from Maple Ridge who is being sought in Thailand. The story is a shocking and lamentable one, but I want to concentrate here on the technical aspect. INTERPOL released images of the man, claiming they had undone the Photoshop ‘twirl’ effect that had been used to disguise him in compromising photos. While this claim has been widely reported in the media, there is at least some reason to question it. It is possible that INTERPOL is concealing the fact that it received unaltered photos from another source, which could have been anything from intercepted emails to files recovered from an improperly erased camera memory card. The images could even have been recovered from the EXIF metadata thumbnails many cameras produce. It is also possible that this particular effect is so easy to reverse (and that the technique is so widely known to exist) that INTERPOL saw no value in keeping their methods secret. A quick Google search suggests that the ‘twirl’ effect is a plausible candidate for easy reversal.

Providing an alternative story to explain the source of information is an ancient intelligence tactic. For instance, during the Second World War an imaginary spy ring was created by the British and used to justify how they had some of the information that had actually been obtained through cracked ENIGMA transmissions at Bletchley Park. Some have argued that the Coventry Bombing was known about in advance by British intelligence due to deciphered messages, but they decided not to evacuate the city because they did not want to reveal to the enemy that their ciphers had been compromised. While this particular example may or may not be historically accurate, it illustrates the dilemma of somebody in possession of important intelligence acquired in a sensitive manner.

Cover stories can conceal sources and methods in other ways. A few years ago, it was claimed that Pervez Musharraf had escaped having his motorcade bombed, due to a radio jammer. While that is certainly possible, it seems unlikely that his guards would have reported the existence of the system if it had played such a crucial role. More likely, they got tipped off from an informant in the group responsible, an agent they had implanted in it, or some sort of communication intercept. Given how it is now widely known that email messages and phone calls worldwide are regularly intercepted by governments, I imagine a lot of spies and informants are being protected by false stories about communication intercepts.

In short, it is fair to say that any organization concerned with intelligence gathering will work diligently to protect their sources and methods. After all, these are what ensure their future access to privileged information. While there is a slim chance INTERPOL intentionally revealed their ability to unscramble photographs as some sort of deterrent, it seems unlikely. This situation will simply encourage people to use more aggressive techniques to conceal their faces in the future. It is also possible that, in this case, they felt that getting the man’s image out was more important than protecting their methods. In my opinion, it seems most likely that ‘twirl’ really is easy to unscramble and that they saw little value in not publicizing this fact. That said, it remains possible that a more complex collection of tactics and calculations has been applied.

Mac security tips

Gatineau Park, Quebec

During the past twelve months, 23.47% of visits to this blog have been from Mac users. Since there are so many of them out there, I thought I would share a few tips on Mac security. Out of the box, OS X does beat Windows XP on security – partly for design reasons and partly because it isn’t as worthwhile to come up with malware that attacks an operating system with a minority of users. Even so, taking some basic precautions is worthwhile. The number one tip is behavioural, rather than technical. Be cautious in the websites and emails you view, the files you download, and the software you install.

Here are more detailed guides from a company called Corsair (which I know nothing about) and from the American National Security Agency (who knew they used Macs?). The first link is specific to Tiger (10.4), while the latter is about the older Panther (10.3). I expect they will both remain largely valid for the upcoming Leopard (10.5).

Some more general advice I wrote earlier: Protecting your computer.

PS. I am curious about the one person in the last orbit who accessed this site using OS/2 Warp, back on February 17th. I hope it was one of the nuns from the ads.

Once more on the importance of backups

As mentioned before, the best defence against data loss from viruses or hardware damage is to make comprehensive, frequent backups. As such, I propose the following rule of thumb:

If a piece of data is worth more than the drive space it occupies, a second copy should exist somewhere else.

Nowadays, you can easily pick up hard drives for less than $1 per gigabyte. At those prices, it probably isn’t just personal photos and messages that are worth saving, but any bulk data (movies, songs, etc) that would take more than $1 per gigabyte in effort to find and download again.

Mac users should consider downloading Carbon Copy Cloner. It produces bootable byte-for-byte copies of entire drives. That means that even if the hard drive in your computer dies completely and irrecoverably, you can actually run your system off an external hard drive, with all the data and functionality it possessed when you made the most recent copy.

One nice perk about having one or more such copies is how they can let you undo mistakes. If you accidentally erased or corrupted an important file, you can go back and grab it. Likewise, if you installed a software update that proved problematic, you can shift your entire system back to an earlier state.

[Update: 22 January 2010] Since I wrote this article, Apple released new versions of OS X with their excellent Time Machine backup software built-in. I strongly encourage all Mac users to take advantage of it.