Liability and computer security

One of the major points of intersection between law and economics is liability. By setting the rules about who can sue brake manufacturers, in what circumstances, and to what extent, lawmakers help to set the incentives for quality control within that industry. By establishing what constitutes negligence in different areas, the law tries to balance efficiency (encouraging cost-effective mitigation on the part of whoever can do it most cheaply) with equity.

I wonder whether this could be used, to some extent, to combat the botnets that have helped to make the internet such a dangerous place. In brief, a botnet consists of ordinary computers that have been taken over by a virus. From the perspective of their users, they do not seem to have been altered, but they can be maliciously employed by remote control to send spam, attack websites, carry out illegal transactions, and so forth. There are millions of such computers, largely because so many unprotected PCs with incautious and ignorant users are constantly connected to broadband links.

As it stands, there is some chance that an individual computer owner will face legal consequences if their machine is used maliciously in this way. It would be a lot more efficient to pass part of the responsibility to internet service providers: ISPs whose networks transmit spam or viruses outwards could be sued by those harmed as a result. These firms have the staff, expertise, and network control. Given the right incentives, they could require users to run up-to-date antivirus software, which the ISPs themselves would provide. They could also screen incoming and outgoing network traffic for viruses and botnet control signals. They could, in short, become more like the IT department at an office. ISPs with such obligations would then lean on the makers of software and operating systems, pushing them to build more secure products.
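
The kind of screening described above could be quite simple in principle. Here is a minimal sketch (the addresses, flow records, and threshold are all illustrative assumptions, not any real ISP's system) of flagging customers whose machines suddenly make many outbound SMTP connections, a common symptom of a spam-sending bot:

```python
from collections import Counter

# Hypothetical NetFlow-style records: (customer_ip, destination_port) pairs
# observed on an ISP's network during some window.
flows = [
    ("203.0.113.5", 25), ("203.0.113.5", 25), ("203.0.113.5", 25),
    ("203.0.113.9", 443), ("203.0.113.9", 25),
]

SMTP_PORT = 25
THRESHOLD = 2  # outbound SMTP flows per window; a real system would tune this

def flag_possible_bots(flows, threshold=THRESHOLD):
    """Return customer IPs making an unusual number of outbound SMTP connections."""
    counts = Counter(ip for ip, port in flows if port == SMTP_PORT)
    return sorted(ip for ip, n in counts.items() if n > threshold)

print(flag_possible_bots(flows))  # ['203.0.113.5']
```

A flagged customer might then be notified, or have port 25 blocked until their machine is cleaned, which some ISPs already do.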

As Bruce Schneier has repeatedly argued, hoping to educate users as a means of creating overall security is probably doomed. People don’t have the interest or the incentives to learn, and the technology and threats change too quickly. To do a better job of combating those threats, our strategies should change as well.

Oryx and Crake


Margaret Atwood’s novel, which was short-listed for the Booker Prize, portrays a future characterized by the massive expansion of human capabilities in genetic engineering and biotechnology. As such, it bears some resemblance to Neal Stephenson’s The Diamond Age, which ponders what massive advances in material science could do, and posits similar stratification by class. Of course, biotechnology is an area more likely to raise ethical hackles and engage with the intuitions people have about what constitutes the ethical use of science.

Atwood does her best to provoke many such thoughts: bringing up the ethics of food, of corporations, of reproduction, and of survival (the last time period depicted is essentially post-apocalyptic). The degree to which this future is brought about by a combination of simple greed, logic limited by one’s own circumstances, and unintended consequences certainly has a plausible feel to it.

The book is well constructed and compelling, obviously the work of someone who is an experienced storyteller. From a technical angle, it is also more plausible than most science fiction. It is difficult to identify any element that is highly likely to be impossible for humanity to ever do, if desired. That, of course, contributes to the chilling effect, as the consequences for some such actions unfold.

All in all, I don’t think the book has a straightforwardly anti-technological bent. It is more a cautionary tale about what can occur in the absence of moral consideration and concomitant regulation. Given how the regulation of biotechnology is such a contemporary issue (stem cells, hybrid embryos, genetic discrimination, etc), Atwood has written something that speaks to some of the more important ethical discussions occurring today.

I recommend the book without reservation, with the warning that readers may find themselves disturbed by how possible it all seems.

Unlocking cars with computers

Back in the day when the original Palm Pilot was a hot new piece of technology, I remember BMW and a number of other car companies started selling cars with a keyless entry system based on an infrared transmitter in a key fob, just like a television remote control. Unfortunately, whatever sort of protocol the system used for authentication was quickly undermined and the Palm Pilot’s infrared transmitter suddenly became a key to all manner of expensive new automobiles.

Something similar has happened again. The KeeLoq system, used in the keyless entry systems of most car manufacturers, has been cracked by computer security researchers. A PDF of their research paper is online. The attack requires about one hour of radio communication with the key, which could be done surreptitiously while the owner is in an office or restaurant. The cryptographic analysis involved takes about a day and produces a ‘master key’ that can actually open a number of different cars. Having collected a large number of such master keys, it would be possible to intercept a single transmission between a key and a car (say, when someone is parking), identify the correct master key, and open the door in seconds. While this will not start the car – and there are certainly other methods available for breaking into one – it does create a risk of theft of objects from inside cars that leaves no signs of forced entry. In many such cases, claiming insurance compensation is difficult.
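
KeeLoq itself is a proprietary block cipher, but the general shape of a rolling-code entry system is easy to sketch. The toy below is not KeeLoq's actual algorithm: the MAC, window size, and names are assumptions chosen for illustration. It shows why simply replaying an overheard code fails, and therefore why the real attack had to break the underlying cipher instead:

```python
import hashlib
import hmac

def code_for(secret: bytes, counter: int) -> str:
    """Rolling code: a short MAC over the fob's current counter value."""
    return hmac.new(secret, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Car:
    def __init__(self, secret: bytes, window: int = 16):
        self.secret, self.counter, self.window = secret, 0, window

    def try_unlock(self, code: str) -> bool:
        # Accept any counter in a small look-ahead window, so the car stays
        # synchronized even if some fob presses happened out of radio range.
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code_for(self.secret, c), code):
                self.counter = c  # never accept this or any earlier code again
                return True
        return False

secret = b"shared-fob-secret"
car = Car(secret)

print(car.try_unlock(code_for(secret, 1)))  # True: fresh code accepted
print(car.try_unlock(code_for(secret, 1)))  # False: replayed code rejected
```

Because each code is used once, an eavesdropper gains nothing from recording transmissions unless, as with KeeLoq, the cipher itself leaks the shared secret.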

Of course, mechanical locks also have their failings. One important difference has to do with relative costs. Making a physical, key-based access control system more secure probably increases the cost for every single unit appreciably. By contrast, improving the cryptography for a system based on an infrared or radio frequency transmission probably involves a one-off software development cost, with negligible additional costs per unit. As such, it is especially surprising that the KeeLoq system is so weak.
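
The economics can be made concrete with deliberately made-up numbers (every figure below is an illustrative assumption): a hardware fix scales with the number of units sold, while a cryptographic fix is mostly a one-off development cost.

```python
units = 1_000_000

# Hypothetical figures: a better physical lock adds cost to every car built,
# while stronger crypto is mostly a fixed software-development expense.
mechanical_upgrade = units * 5.00          # $5 of extra hardware per unit
crypto_upgrade = 250_000 + units * 0.00    # one-off development, ~zero per unit

print(mechanical_upgrade, crypto_upgrade)  # 5000000.0 250000.0
```

On assumptions like these, the cryptographic fix is cheaper by more than an order of magnitude, which is what makes KeeLoq's weakness surprising.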

Quantum computers and cryptography

Public key cryptography is probably the most significant cryptographic advance since the discovery of the monoalphabetic substitution cipher thousands of years ago. In short, it provides an elegant solution to the problem of key distribution. Normally, two people wishing to exchange encrypted messages must exchange both the message and the key to decrypt it. Sending both over an insecure connection is obviously unsafe and, if you have a safe connection, there is little need for encryption. Based on some fancy math, public key encryption systems let Person A encrypt messages for Person B using only information that Person B can make publicly available (a public key, like mine).
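
The ‘fancy math’ can be shown at toy scale with RSA, the best-known public key system. The primes below are deliberately tiny (real keys use moduli of 2048 bits or more), but the structure is the same: Person B publishes (e, n) and keeps d secret.

```python
# Toy RSA keypair from two small primes (illustration only, not secure).
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: e*d ≡ 1 (mod phi)

m = 42                     # a message, encoded as a number < n
c = pow(m, e, n)           # Person A encrypts using only the public (e, n)
assert pow(c, d, n) == m   # only the holder of d can decrypt
print(m, c)
```

The key-distribution problem disappears because (e, n) can be shouted from the rooftops; nothing secret ever travels over the insecure channel.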

Now, quantum computers running Shor’s algorithm threaten to ruin the party. Two groups claim to have achieved some success. If they manage the trick, the consequences will be very significant, and not just for PGP-using privacy junkies. Public key encryption is also the basis for all the ‘https’ websites where we so happily shop with credit cards. If a fellow in a van outside can sniff the traffic from your wireless network and later decrypt it, buying stuff from eBay and Amazon suddenly becomes a lot less appealing.
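
The threat is easy to demonstrate at toy scale. Factoring the public modulus, which Shor's algorithm would do efficiently even at real key sizes but which is done here by slow trial division, immediately yields the private key:

```python
# An attacker knows only the public key and an intercepted ciphertext.
n, e = 3233, 17
c = pow(1234, e, n)        # ciphertext sniffed off the wire

# Classical trial division stands in for Shor's algorithm; it is hopeless
# for 2048-bit moduli, which is the only thing protecting the traffic.
p = next(k for k in range(2, n) if n % k == 0)
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))
print(pow(c, d, n))        # recovers 1234 without the private key
```

Everything from PGP mail to ‘https’ sessions recorded today could be decrypted retroactively by whoever builds such a machine first.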

Thankfully, quantum computers continue to prove very difficult to build. Of course, some well-funded and sophisticated organization may have been quietly using them for years. After all, the critical WWII codebreaking work at Bletchley Park was only made known publicly 30 years after the war.

For those who want to learn more, I very much recommend Simon Singh’s The Code Book.

Precaution and bats

The ‘precautionary principle’ is frequently invoked in arguments about both security and the environment, but remains enduringly controversial. No matter how it is formulated, it has to do with probabilities and thresholds for action. Sometimes, it is taken to mean that there need not be proof that something is harmful before it is restricted: for instance, in the case of genetically modified foods. Sometimes, it is taken to mean that there need not be proof that something is beneficial before it is done: for example, with organic foods. Sometimes, it has to do with who gets the benefit of the doubt, in the face of inconclusive or inadequate scientific data.

This article from Orion Magazine provides some interesting discussion of how it pertains to health threats generally, with an anecdote about rabid bats as an illustrative example.

I am not sure if there is all that much of a take home message – other than that people behave inconsistently when presented with risks that might seem similar in simple cost-benefit terms – but the article is an interesting one.

Peering into metal with muons

When cosmic rays collide with molecules in the upper atmosphere, they produce particles called muons. About 10,000 of these strike every square metre of the earth’s surface each minute. These particles are able to penetrate several tens of metres through most materials, but are scattered to an unusual extent by atoms that include large numbers of protons in their nuclei. Since this includes uranium and plutonium, muons could have valuable security applications.

Muon tomography is a form of imaging that can be used to pick out fissile materials, even when they are embedded in dense masses. For instance, a tunnel-sized scanner could examine entire semi trucks or shipping containers in a short time. Such tunnels would be lined with gas-filled tubes, each containing a thin wire capable of detecting muons on the basis of a characteristic ionization trail. It is estimated that scans would take 20–60 seconds, with less time needed for vehicles and objects of a known configuration.
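
A back-of-envelope calculation using the flux quoted above suggests why such scans can be quick (the container dimensions here are illustrative assumptions, roughly those of a standard 40-foot unit):

```python
# Muons crossing a shipping container's top surface during one scan.
FLUX = 10_000                    # muons per square metre per minute
length_m, width_m = 12.0, 2.4    # rough top surface of a 40-foot container
scan_minutes = 30 / 60           # a 30-second scan

muons = FLUX * (length_m * width_m) * scan_minutes
print(round(muons))  # 144000
```

Well over a hundred thousand muons pass through the cargo in half a minute, plenty to map where the strong scatterers sit.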

Muons have also been used in more peaceful applications: such as looking for undiscovered chambers in the Pyramids of Giza and examining the interior of Mount Asama, in Japan.

APEC joke motorcade

By now, most people will have heard about the prank pulled off at APEC by the Australian television show The Chaser’s War on Everything. In short, they managed to get a fake motorcade of black cars with Canadian flags on them through two checkpoints and within ten metres of the hotel where President Bush was staying – all this during the heaviest security Sydney has ever seen, and while a man dressed as Osama bin Laden was in one of the cars. Previously, they managed to convince various film studios and embassies in Australia to allow them to bring a huge wooden horse into their secure compounds. Naturally, it was full of sword-wielding men in silly period costumes. They also have a series of sketches where men dressed as stereotypical tourists manage to wander into all manner of secure areas, while people in traditional Arab garb get stopped within minutes.

All told, I think this prank is pretty funny. It also demonstrates how a bunch of circling helicopters and huge steel fences aren’t much good for security when the people you hire are muppets and the procedures you employ are half-baked. The fake passes that got them through both the ‘Green Zone’ and more secure ‘Red Zone’ checkpoints are hilarious.

Web abuse


Spam is terribly frustrating stuff, partly because it is inconvenient and partly because it is a cancer that wrecks good things. (See previous: 1, 2, 3) The ideal internet is a place of free and honest communication. Spammers create the need for extensive defenses and scrutiny, which take time to maintain and which diminish that openness and spontaneity.

If you think the spam in your email inbox is bad, just consider yourself lucky that you do not also have to deal with comment and trackback spam on two blogs, a wiki, YouTube videos, and a half dozen secondary places. There are even phony marketing bots on Facebook now: keep your eyes peeled for ‘Christine Qian’ and ‘her’ ilk.

In the end, while decentralized approaches to spam management are time consuming and annoying, they are probably better than centralized systems would be. With the latter, there is always the danger of the wholesale manipulation and censorship of what is able to find its way online, or be transmitted across the web.

A closer look at the War Museum controversy

Still pondering the controversy about the display in the Canadian War Museum, I decided to go have a look at it first-hand. On the basis of what I saw, I am even more convinced that the display is fair and balanced, and that it should not be altered in response to pressure from veterans.

Here, you can see the panel in question in its immediate surroundings:

An Enduring Controversy, and surroundings

This is one small part of a large area discussing the air component of the Second World War. A shot with a narrower field of view shows the controversial panel itself more clearly:

Enduring Controversy

Here is a large close-up shot of the panel text. Nearby, a more prominent panel stresses the deaths of Canadian aircrew and the degree to which aerial bombing “damaged essential elements of the German war effort.” This alternative panel is located right at the entrance to this section of the museum.

If anyone wishes to comment to the museum staff, I recommend emailing or calling Dr. Victor Rabinovitch, the President and CEO. His contact information, along with that of other members of the museum directorate, is available on this page.