Privacy and Facebook applications

I have mentioned Facebook and the expectation of privacy before. Now, the blog of the Canadian privacy commissioner is highlighting one of the risks. Because third party applications have access both to the data of those who install them and to the data of their friends, they can be used to surreptitiously collect information from people in the latter group. While this widens the scope of what third party applications can do, it also seriously undermines the much-trumpeted new privacy features in the Facebook platform.

It just goes to reinforce what I said before: you should expect that anything you post on Facebook is (a) accessible to anyone who wants to see it and (b) likely to remain available online indefinitely. The same goes for most information that is published somewhere online, including on servers you operate yourself.

The Fischer-Tropsch process

Emily Horn and the sunset

Those hoping to understand energy politics in the coming decades would be well advised to read up on the Fischer-Tropsch process. This chemical process uses catalysts to convert carbon monoxide and hydrogen into liquid hydrocarbons. Basically, it allows you to make gasoline using any of a large number of inputs as a feedstock. If the input you use is coal, this process is environmentally disastrous. It combines all the carbon emissions associated with coal burning with extra energy use for synthetic fuel manufacture, not to mention the ecological and human health effects of coal mining. If the feedstock is biomass, it is possible that it could be a relatively benign way to produce liquid fuels for transport.
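The overall chemistry is straightforward: (2n+1) H2 + n CO → CnH2n+2 + n H2O. Here is a rough sketch of the stoichiometry (the function name is my own, for illustration):

```python
def syngas_for_alkane(n):
    """Moles of CO and H2 needed to synthesize one mole of the
    straight-chain alkane C_n H_(2n+2) via Fischer-Tropsch:
        (2n + 1) H2 + n CO -> C_n H_(2n+2) + n H2O
    Returns (moles_CO, moles_H2)."""
    if n < 1:
        raise ValueError("need at least one carbon atom")
    return n, 2 * n + 1

# Octane (C8H18), a major gasoline component:
co, h2 = syngas_for_alkane(8)
print(f"1 mol octane needs {co} mol CO and {h2} mol H2 (H2:CO ratio {h2 / co:.2f})")
```

For octane this works out to 8 moles of CO and 17 moles of H2 per mole of fuel, which is why practical Fischer-Tropsch plants aim for a syngas H2:CO ratio of around two.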

The process was developed in Germany during the interwar period and used to produce synthetic fuels during WWII. The fact that it can reduce military dependence on imported fuel is appealing to any state that wants to retain or enhance its military capacity, but feels threatened by the need to import hydrocarbons. The US Air Force has shown considerable interest for precisely that reason, though they are hoping to convert domestic coal or natural gas into jet fuel – an approach that has no environmental benefits. By contrast, biomass-to-liquids offers the possibility of carbon neutral fuels. All the carbon emitted by the fuel was absorbed quite recently by the plants from which it was made.

Such fuels are extremely unlikely to ever be as cheap as gasoline and kerosene – even with today’s oil prices. The fact that there are parts of the world where you can just make a hole in the ground and watch oil spray out ensures that. That said, Fischer-Tropsch-generated fuels could play an important part in a low-carbon future, provided three conditions are met: (a) the fuels are produced from biomass, not coal or natural gas; (b) the energy used in the production process comes from sustainable low-carbon sources; and (c) the process of growing the biomass is not unacceptably harmful in other ways. If land is redirected towards growing biomass in a way that encourages deforestation or starves the poor, we will not be able to legitimately claim that synthetic fuels are a solution.

Digital camera noise signatures

I previously mentioned the possibility that JPEG metadata could cause problems with your cropping, revealing sections of photos that you did not want to make public. Another risk that people should be aware of relates to the particular ‘signatures’ of the digital sensors inside cameras:

If you take enough images with your digital camera, they can all be compared together and a unique signature can be determined. This means that even when you think that you are posting a photo anonymously to the internet, you are actually providing clues for the government to better tell who you are. The larger the sample size of images they have, the easier it is for them to track down images coming from the same camera. Once they know all the images are coming from the same camera, all they then have to do is find that camera and take a picture to confirm it beyond a reasonable doubt.

The possible implications are considerable. This technique could be used in crime fighting, though also in tracking down human rights campaigners and other enemies of oppressive states. While the linked page lists some techniques for removing the tell-tale signs, there is no guarantee they will work against any particular agency or individual who is trying to link a bunch of photos to one camera or photographer.
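The underlying technique is known in the research literature as photo-response non-uniformity (PRNU) matching. The following is a toy sketch of the idea using synthetic data and a deliberately crude denoiser – all function names are mine, and real forensic tools use far more sophisticated filters:

```python
import numpy as np

def noise_residual(img):
    """Crude denoiser: subtract a 3x3 local mean, leaving the
    high-frequency residual where the fixed sensor noise lives."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local_mean = sum(padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)) / 9.0
    return img - local_mean

def camera_fingerprint(images):
    """Average the residuals of many images from one camera: the
    scene content averages away, the fixed sensor pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def match_score(img, fingerprint):
    """Normalized correlation between a photo's residual and a
    candidate fingerprint; near zero for an unrelated camera."""
    a = noise_residual(img)
    a = a - a.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

With even a few dozen images from the same sensor, the match score for a new photo from that camera stands out clearly above the near-zero scores produced by photos from other cameras.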

The take-home lesson is that anonymity is very hard in a world where so many tools can be used to puncture it.

Some useful patterns in English

Rusty connector

By about 1300 CE, Arabic cryptographers had determined that you can decipher messages in which one letter has been replaced by another letter, number, or symbol by exploiting statistical characteristics of the underlying language. Here are some especially useful patterns in English.

  1. E is by far the most common letter – representing about 1/8th of normal text.
  2. If you list the alphabet from most to least commonly used, it divides into four groups.
  3. The highest frequency group includes: e, t, a, o, n, i, r, s, and h.
  4. The middle frequency group includes: d, l, u, c, and m.
  5. Less common are p, f, y, w, g, b, and v.
  6. The lowest frequency group includes: j, k, q, x, and z.
  7. E associates most widely with other letters: appearing before or after virtually all of them, in different circumstances.
  8. Among the combinations of a, i, and o: io is the most common, ia is the second most common, and ae is the rarest.
  9. 80% of the time, n is preceded by a vowel.
  10. 90% of the time, h appears before vowels.
  11. R tends to appear with vowels; s tends to appear with consonants.
  12. The most common repeated letters are ss, ee, tt, ff, ll, mm and oo.

Naturally, there are thousands more such patterns. Even understanding a few can help in deciphering messages that have had a basic substitution cipher applied.
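Patterns like these are easy to check programmatically. Here is a minimal sketch of the first step in attacking a monoalphabetic substitution – ranking the letters of a text by frequency (the function name is my own):

```python
from collections import Counter

def letter_frequencies(text):
    """Rank the letters in a text from most to least common,
    returning (letter, relative frequency) pairs."""
    letters = [c for c in text.upper() if c.isalpha()]
    total = len(letters)
    return [(letter, count / total)
            for letter, count in Counter(letters).most_common()]

# In any long sample of ordinary English, 'E' should come out on top:
sample = ("those hoping to understand energy politics in the coming decades "
          "would be well advised to read up on the fischer tropsch process")
print(letter_frequencies(sample)[:3])
```

Applied to a ciphertext instead, the same function suggests candidate mappings: the most common cipher letter probably stands in for e or t.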

Here’s one to try out:

LKCLHQBCKDRCPQQBDKAPZULSQUCDK
AZRDTDGPCOTZKQDPQBZQDQZHHLOIP
XLSVDQBZAOCZQICZGLHQDJCQLOCZI
QBDKAPQBZQDKQCOCPQXLSDKXLSOPM
ZOCQDJCSKHLOQSKZQCGXLQQZVZDPO
CGZQDTCGXMLLOGXMOLTDICIHLOVDQ
BRZHCPQLLJZKXLHQBCJRGLPCNSDQC
CZOGXDKQBCCTCKDKA

One hint is that cipher alphabets are not always entirely random. The tools on this page are useful for cracking monoalphabetic substitution ciphers.

Odds guessing results

Thanks in large part to Zoom (of Knitnut.net), I have received 54 valid responses to my odds guessing experiment. As those who read the explanation already know, the point of the experiment was to assess how people judge the relative risks of a vague but more probable outcome versus a concrete but less likely one. The vague result (1,000 deaths from flooding somewhere in the United States this year) was assigned to ‘heads.’ The precise result (1,000 deaths from Florida hurricane induced flooding) was assigned to ‘tails.’

The first result to note is the very wide disparity of answers. Responses for ‘heads’ ranged from 0.005% all the way up to 90%. Responses for ‘tails’ ran from 0% to 75%. Given that there has been no flood in American history that killed 1,000 people, it seems fair to say that most guesses are overestimates. That said, the point of the experiment was to judge the relative responses in the two cases, not the absolute accuracy of the responses. This scatterplot shows the complete set of responses for both questions.

The mean probability estimate for ‘heads’ was 19.3%, while that for ‘tails’ was 23.8%. Because there were a large number of very high and very low guesses, it is probably better to look at descriptive statistics that aren’t influenced by outliers. This boxplot shows the median, first and third quartile, maximum, and minimum results for each. To understand box plots, imagine that all the people who guessed are made to stand in a line, ranked from highest to lowest guess. Each of the numbers described previously (quartiles, etc.) corresponds to a position in the line. To find something like the median, you locate the person in the very middle of the line, then take their guess as your number. The advantage of doing this is that it prevents people who guessed very high from dragging the estimate up (as happens with the mean, or average), and likewise for those who guessed very low.

The yellow triangle is the median. For ‘heads’ the median was 7.5%, compared to 10% for ‘tails.’ The grey boxes show the range of guesses made by the middle half of the sample. At the top is the guess made by the person 3/4 of the way up the line, and at the bottom is the one made by the person 3/4 of the way down the line. As you can see, the bottom half of the range looks pretty similar in both cases: half of respondents estimated that the risk of both the ‘heads’ and ‘tails’ outcomes was between about 0% and about 10%. What differs most about the two distributions is the upper portion of the grey boxes. Whereas 75% of respondents thought the ‘heads’ option was less than 30% probable, that value was more like 40% for the ‘tails’ option.
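The difference between the mean and these rank-based statistics is easy to demonstrate with Python's standard library. The numbers below are hypothetical, not the actual survey responses:

```python
import statistics

# Hypothetical probability guesses (%), including one very high outlier
guesses = [1, 2, 5, 7, 8, 10, 12, 90]

mean = statistics.mean(guesses)      # dragged upward by the 90% guess
median = statistics.median(guesses)  # the middle of the ranked line
q1, _, q3 = statistics.quantiles(guesses, n=4)  # box edges on a boxplot

print(f"mean={mean}, median={median}, quartiles=({q1}, {q3})")
```

A single extreme guess more than doubles the mean, while the median and quartiles barely move – which is exactly why the boxplot is the better summary here.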

A couple of problems exist with this experimental design. Among the 54 ‘coin tosses,’ 63% seem to have come up heads. While it is entirely possible that this is the result of fair throws, I think there is at least some chance that people just chose ‘randomly’ in their heads, in a way that favoured heads over tails. Another problem is that some people might have looked at the comments made by others before guessing, or may even have searched online for information about flooding probabilities.

In conclusion, I would say the experiment provides weak support for my hypothesis. It is undeniably the case that the ‘heads’ option is more likely than the ‘tails’ option, and yet both the mean and median probability assigned to ‘tails’ is higher. There are also significantly more people who assigned ‘tails’ a risk of over 10%.

Those wanting to do some tinkering of their own can download the data in an Excel spreadsheet.

[Update: 28 April 2008] There has been some debate about the point above about the slight heads-bias in the results. I am told that the odds of this outcome are one in 26.3. Whether random chance or a systemic bias better explains that, I will leave to the interpretation of readers. In any event, it only really matters if the ‘heads’ group and ‘tails’ group differed in terms of their natural perception of risk.
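For those curious, the ‘one in 26.3’ figure can be checked with an exact binomial calculation, assuming that 63% of 54 tosses means 34 heads:

```python
from math import comb

def prob_at_least_heads(heads, flips):
    """Exact probability of at least `heads` heads in `flips`
    tosses of a fair coin."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# 63% of 54 tosses is 34 heads:
p = prob_at_least_heads(34, 54)
print(f"P(>=34 heads in 54 flips) = {p:.4f}, about 1 in {1 / p:.1f}")
```

A tail probability of a few percent is unusual but hardly damning – roughly the chance of rolling the same number twice in a row with a die.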

The Aragorn Fallacy

Stencil chicken

Watching films, I find myself very frequently annoyed with what I shall call The Aragorn Fallacy. The essence of the fallacy is to equate importance with invulnerability, especially in the face of random events.

Consider a battle that employs swords, spears, and bows and arrows. To some extent, your skill reduces the likelihood of getting killed with a sword (unless you are among the unfortunate individuals whose line is pressed into a line of swordsmen). No conceivable battlefield skill makes you less vulnerable to arrows (or bullets) once you are in the field of fire. As such, mighty King Aragorn is just as likely to be shot and killed as some forcibly drafted peasant hefting a spear for the first time. Sensible military leaders realize that their role is not to serve as cannon fodder, and that they needlessly waste their own lives and those of their men by putting themselves in such positions.

Of course, people will object, there have been military leaders who ‘led from the front,’ put themselves at points of great danger, and went on to high achievement. The problem with this view is that it completely ignores all the young would-be Rommels and Nelsons and Pattons who were felled as young captains or lieutenants by a stray bit of shrapnel, or by gangrene in a wound produced by a stray bit of barbed wire. With a sufficiently large starting population, you will always end up with examples of people who were reckless but nonetheless survived and thrived. It is foolish to conclude from this that recklessness is either justified or likely to produce success.

Clearly, storytelling and life are different things. We admire superhuman heroes who shake off bullets and arrows like awkward drops of water. We may rationally accept that nonsense like throwing all your best commanders into the front line of a battle is strictly for the movies. The fallacy here is less that we believe these things to be true, and more that we feel them to be excellent. The grim fact that war is a brutal and largely random business sits poorly with our general affection for such stories.

Odds guessing experiment

One of the subtle pleasures associated with reading this blog is the occasional opportunity to be experimented upon. Today is such a day.

Instructions:

  1. Read all these instructions before actually completing step two.
  2. Flip a coin.
  3. Please actually flip a coin. People who choose ‘randomly’ in their heads do not actually pick heads and tails equally. If you don’t have a coin, use this online tool.
  4. If it landed heads, click here.
  5. If it landed tails, click here.
  6. When you click one of the links above, you will see a description of an event.
  7. Before looking at the comments below, estimate the probability of the event you see described happening in the next year.
  8. Write that as a comment, indicating whether you are answering the heads question or the tails question.

When you are done, you are naturally free to read the other question and the comments left by others.

Even if you don’t normally comment, please do so in this case. I want to get enough responses to permit a statistical comparison.

Thermonuclear weapon design

A common misunderstanding about thermonuclear weapons (those that employ tritium-deuterium fusion as well as the fission of uranium or plutonium) is that most of the extra energy produced comes from fusion. In fact, the great majority comes from additional fission encouraged by neutrons produced by the fusion reaction. Each atom that undergoes fission generates about 180 million electron volts (MeV) of energy, equivalent to roughly 74 terajoules per kilogram. Tritium-deuterium fusion produces only 17.6 MeV per reaction, though the nuclei that undergo fusion are far less massive than those that undergo fission.
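The arithmetic can be verified directly. Per kilogram of reacting material, fusion actually releases more energy than fission (roughly 340 versus 74 TJ/kg); fission dominates the total yield only because far more fissile mass actually reacts. A quick sketch, using CODATA values for the physical constants:

```python
EV_TO_J = 1.602176634e-19    # joules per electronvolt
AMU_TO_KG = 1.66053907e-27   # kilograms per atomic mass unit

def energy_density_tj_per_kg(mev_per_reaction, amu_per_reaction):
    """Energy released per kilogram of reacting material (TJ/kg)."""
    joules = mev_per_reaction * 1e6 * EV_TO_J
    kilograms = amu_per_reaction * AMU_TO_KG
    return joules / kilograms / 1e12

fission = energy_density_tj_per_kg(180, 235)  # U-235 fission: ~180 MeV
fusion = energy_density_tj_per_kg(17.6, 5)    # T (3 u) + D (2 u): 17.6 MeV

print(f"fission: {fission:.0f} TJ/kg, fusion: {fusion:.0f} TJ/kg")
```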

The general functioning of a modern thermonuclear bomb (Teller-Ulam configuration) is something like the following:

  1. A neutron generator bombards the plutonium pit of the primary (fission device).
  2. Exploding-bridgewire or slapper detonators initiate the high explosive shell around the pit.
  3. The pit is compressed to a supercritical density.
  4. The pit undergoes nuclear fission, aided by the neutron reflecting properties of a shell made of beryllium, or a material with similar neutron-reflection properties.
  5. The fission process in the primary is ‘boosted’ by the fusion of tritium-deuterium gas contained in a hollow chamber within the plutonium.
  6. The x-rays produced by the primary are directed toward the secondary through an interphase material.
  7. Within the secondary, heat and compression from the primary induce the production of tritium from lithium deuteride.
  8. Tritium and deuterium fuse, producing energy and high-energy neutrons.
  9. Those neutrons help induce fission within a uranium-235 pit within the secondary (called the spark plug). Layers of uranium-235 may alternate with layers of lithium deuteride, and the whole secondary may be encased in a sphere of uranium-235 or 238. This tamper holds the secondary together during fission and fusion. Uranium-235 or 238 will also undergo fission in the presence of neutrons from fusion.

Throughout this process, the whole device is held together by a uranium-238 (depleted uranium) case. This is to ensure that the reactions proceed as far as possible before the whole physics package is blasted apart.

One important security feature can be built into the detonators that set off the explosive shell around the primary. By giving each detonator a fuse with a precisely set random delay, it is possible to ensure that only those who know the timing of each detonator can cause the bomb to explode as designed. If the detonators do not fire in a very precisely coordinated way, the result is likely to be the liquefaction of the plutonium core, followed by it being forced out of the casing as a fountain of liquid metal. Nasty as that would be, it is better than the unauthorized detonation of the weapon.

The detonators are also an important safety feature since their ability to cause very stable explosives to detonate means that the high explosive shell can be made of something that doesn’t detonate easily when exposed to shock or heat. That is an especially valuable feature in a world where bombs are sometimes held inside crashing planes, and where fires on submarines can prove impossible to control.

“The Environment: A Cleaner, Safer, Healthier America”

Milan Ilnyckyj on the Alexandra Bridge, Ottawa

A book I am reading at present – Joseph Romm’s Hell and High Water – drew my attention to an essay on climate change written by Frank Luntz, a political consultant who worked to oppose the regulation of greenhouse gasses.

The leaked memo, entitled “The Environment: A Cleaner, Safer, Healthier America,” provides a glimpse into the strategies of climate delayers that is both informative and chilling:

“The scientific debate is closing [against us] but not yet closed. There is still a window of opportunity to challenge the science…

Voters believe that there is no consensus about global warming within the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly…

Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate.”

The cynicism of it all is astounding. To see something as vital as climate change treated as a superficial, partisan rhetorical battle is extremely dispiriting.

The actual document is also oddly unavailable online. I had to use the Wayback Machine to find a PDF of the original leaked document. I am hosting it on my own server to aid people in locating it in the future. Clearly, I cannot vouch for its veracity personally. That said, articles in The Guardian and on George Monbiot’s site accept the document as genuine.

Telecom immunity and the rule of law

Black lagoon pinball machine

A recent article in Slate discusses how legal policy in the United States should be fixed in the post-Bush era. There are many things in it with which I wholeheartedly disagree. Perhaps the most egregious case is in relation to providing immunity to telecom firms that carried out illegal wiretaps for the administration. Jack Goldsmith argues:

Private-industry cooperation with government is vital to finding and tracking terrorists. If telecoms are punished for their good-faith reliance on executive-branch representations, they will not help the government except when clearly compelled to do so by law. Only full immunity, including retroactive immunity, will guarantee full cooperation.

I think the bigger danger here is setting a precedent that firms can break the law when asked by the administration, then be bailed out afterwards. Only fear of prosecution is likely to make firms obey the law in the first place. Providing immunity would invalidate the concept of the rule of law, and open the door to more illegal actions carried out by the executive branch. “Full cooperation” is precisely what we do not want to encourage.

If government wants to intercept the communication of private individuals, it must be a policy adopted through the due course of law. People need to know what it involves (though not necessarily the details of exactly how it works), who supported it, and how those supporters justified the choice. Greater security from terrorism at the cost of a more opaque and lawless state is not a good tradeoff. Company bosses should fear that they will be the ones in the dock when evidence emerges of their engaging in criminal acts, regardless of who asked them to do so. The alternative is more dangerous than the plots that warrantless wiretapping sought to foil.