Online data and death

Perhaps the most unusual WordPress plugin I’ve ever heard of is Next of Kin. According to the plugin’s creator:

It monitors your own visits to your wordpress system, and will send you a warning email after a number of weeks (of your choice) without a visit. If you fail to visit your blog even after that, the system will send a mail you wrote to whoever you choose.

Presumably, the idea is to include the access credentials for your site(s) in the final email.

This raises the more general question of what should happen to web content after a person dies. Facebook pages can be turned into memorials. Blogs can be left up, intentionally taken down, or left to eventually vanish from non-payment or some other hosting change. What is most appropriate generally? What would readers want for themselves?

Email might be the trickiest of all. Most of it is trivial, but some is an important life record. Should any of it ever be passed along to survivors, as a person’s personal correspondence might once have been?

Deployability of nuclear weapons

Being able to build a device that can produce a nuclear explosion is a significant challenge in itself. Also challenging is building such a device in a self-contained way which does not require difficult last-minute assembly, and which can be stored in a usable state for years. The first American bombs certainly did not meet this standard.

Captain William Parsons, a U.S. Navy weapons expert with the 509th Composite Group (the B-29 squadron that dropped the atomic bombs on Japan during WWII), described the complex and hazardous operation in a letter intended to convince his superiors that dummy devices were required for practice runs:

It is believed fair to compare the assembly of the gun gadget [the uranium bomb] to the normal field assembly of a torpedo, as far as mechanical tests are involved… The case of the implosion gadget [the plutonium bomb] is very different, and is believed comparable in complexity to rebuilding an airplane in the field. Even this does not fully express the difficulty, since much of the assembly involves bare blocks of high explosives and, in all probability, will end with the securing in position of at least thirty-two boosters and detonators, and then connecting these to firing circuits, including special coaxial cables and high voltage condenser circuit… I believe that anyone familiar with advance base operations… would agree that this is the most complex and involved operation which has ever been attempted outside of a confined laboratory and ammunition depot.

Rhodes, Richard. The Making of the Atomic Bomb. p.590 (paperback)

The reason the bomb had to be so substantially assembled right before use probably had to do with the initiator – a sub-component at the very centre of the bomb, designed to produce a handful of neutrons at the critical moment to initiate fission. At the same time, it was critical that the initiator not produce even a single neutron before the bomb was used.

In early American bombs, initiators apparently consisted of the alpha-particle emitter polonium-210 (half-life 138.4 days) sandwiched between metal foils to keep it from reacting prematurely with the beryllium metal nearby. When the high explosive shell wrapped around the natural uranium tamper and plutonium core of the implosion bomb detonated, the components of the initiator would mix and react, producing neutrons at the same moment as the explosives were producing compression.

Details on initiators are still classified, so we can only speculate on how the implosion primaries in modern bombs function.

The whole issue of deployability is relevant to questions of nuclear proliferation insofar as it is more difficult to make a stable, battlefield-usable bomb than to make a device capable of generating a nuclear explosion. That being said, many of the technical details of bomb manufacture have been made available to states contemplating the development of nuclear weapons. That has partly been the product of clandestine activities like the operation of the A.Q. Khan proliferation network. It has also been the consequence of states being insufficiently cautious when it comes to safeguarding knowledge, materials, and equipment.

Reforming the IPCC

Alternative title: What to do when everybody ignores you?

In the wake of the University of East Anglia email scandal, there has been yet another review of the work of the Intergovernmental Panel on Climate Change (IPCC). This one was chaired by Harold Shapiro, a Princeton University professor, and concluded that “[t]he U.N. climate panel should only make predictions when it has solid evidence and should avoid policy advocacy.”

The IPCC has certainly made some mistakes: issuing some untrue statements, and evaluating some evidence imperfectly. That being said, the details they got wrong were largely of a nitpicky character. The core claims of the IPCC reports – that climate change is real, caused by humans, and dangerous – remain supremely justified. The trouble is, governments aren’t willing to take action on anything like the appropriate scale.

The situation is akin to a doctor giving a patient a diagnosis of cancer, after which the patient decides that he will try to cut down on his consumption of sugary drinks. That might improve the patient’s health a bit, but it is not an adequate response to the problem described. At that point, it would be sensible for the doctor to engage in a bit of ‘policy advocacy’ and stress how the proposed solution is dangerously inadequate.

It can be argued that the IPCC works best when it presents the bare facts and leaves others to make policy decisions. The trouble is, people don’t take the considered opinions of this huge group of scientists sufficiently seriously. They are happy to let crackpots tell them that there is no problem or that no action needs to be taken. While scientists should not be saying: “Here is what your government’s climate change policy should be” they should definitely be saying: “Here are the plausible consequences of the policy you are pursuing now, and they don’t match with the outcomes you say you want to achieve (like avoiding over 2°C of temperature increase)”. They could also very legitimately say: “If you want to avoid handing a transformed world over to future generations, here is the minimum that must be done”. James Hansen accomplishes this task rather well:

Today we are faced with the need to achieve rapid reductions in global fossil fuel emissions and to nearly phase out fossil fuel emissions by the middle of the century. Most governments are saying that they recognize these imperatives. And they say that they will meet these objectives with a Kyoto-like approach. Ladies and gentleman, your governments are lying through their teeth. You may wish to use softer language, but the truth is that they know that their planned approach will not come anywhere near achieving the intended global objectives. Moreover, they are now taking actions that, if we do not stop them, will lock in guaranteed failure to achieve the targets that they have nominally accepted.

Scientists don’t lose their integrity when they present scientific information in a way that policy-makers and citizens can understand. Indeed, it can be argued that they show a lack of integrity when they hide behind technical language that keeps people from grasping the implications of science.

Photo storage costs

At Ottawa’s 2010 Capital Pride festivities, I found myself thinking back to my Oxford days when I would generally only take a couple of hundred photos a month on my 3.2 megapixel digital camera.

By contrast, I took around 400 shots during the course of the parade and the party that followed. Initially, that struck me as a bit excessive and made me nervous. Then it occurred to me that a 4 terabyte external hard drive sells for about $400 these days, meaning that the cost of storing one gigabyte worth of photos is around 20¢ – ten for the external drive, and ten for the internal one it is backing up. The biggest constraint I face is the cost of replacing the 750GB hard drive in my iMac, given that the things really have to be stripped apart for that to be accomplished.
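That arithmetic is easy to check. A quick sketch, using the figures above plus an assumed typical file size of about 5 MB per photo (my assumption, not a figure from the text):

```python
# Back-of-envelope photo storage cost, using the $400 / 4 TB figure
# from the text and an assumed ~5 MB per photo.
DRIVE_TB = 4
DRIVE_COST = 400.0
MB_PER_PHOTO = 5  # assumption: a typical compressed shot

cost_per_gb = DRIVE_COST / (DRIVE_TB * 1000)   # external drive alone
cost_per_gb_backed_up = 2 * cost_per_gb        # plus the internal drive it mirrors

photos_per_gb = 1000 / MB_PER_PHOTO
cost_per_photo = cost_per_gb_backed_up / photos_per_gb

print(f"Storage: {cost_per_gb_backed_up * 100:.0f} cents/GB, "
      f"{cost_per_photo * 100:.2f} cents/photo")
```

At those rates, even a 400-shot parade costs well under a dollar to store twice over.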

The cost per shot of digital is pretty amazing, compared with film. Of course, there is a new danger that accompanies that. With big memory cards and high speed internet connections, you risk putting more photos online than your friends or readers would ever wish to see.

iTunes artist bug

Here’s an odd iTunes bug. Sometimes when you import a CD, the tracks get copied to your iTunes Library and onto your iPod/iPhone when you sync it. Oddly, the albums are not accessible through the ‘Artists’ list in either iTunes itself or on an iPod.

The problem results when iTunes inappropriately labels tracks as ‘part of a compilation’.

To fix it, open iTunes and select the problematic tracks. Right-click on them and choose ‘Get Info’. In the window that comes up, choose the ‘Info’ tab, tick the checkbox beside ‘Part of a compilation’, and select ‘No’ from the dropdown menu.

You must then re-synchronize your iPod/iPhone.

This bug is present in iTunes 9.1.1 and possibly other versions.

Climate and the timing of emissions

Climatologist James Hansen emphatically argues that cumulative emissions are what really matter – how much warming the planet experiences depends on what proportion of the world’s fossil fuels get burned.

One reason for this is the long lifetime of CO2 in the atmosphere, with much of it remaining after thousands of years. That being said, the model simulation I have seen shows concentrations dropping sharply, and then tapering off with time:

It seems like it would be helpful to put together that chart with this one, showing historical and expected CO2 concentration increases:

Atmospheric concentration of CO2

A combined chart on the same scale would illustrate what would happen to CO2 concentrations if we stopped emitting at some point soon, specifically what the next few decades would look like.
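That sort of decay curve – a sharp initial drop followed by a long tail – is often approximated with a multi-exponential impulse response. Here is a sketch using the Bern carbon-cycle fit reported in IPCC AR4; the coefficients are illustrative and not necessarily those behind the simulation described above:

```python
import math

# Multi-exponential approximation of how a pulse of CO2 decays in the
# atmosphere (Bern carbon-cycle fit, IPCC AR4 - illustrative values).
A0 = 0.217                       # fraction that effectively never decays
TERMS = [(0.259, 172.9),         # (a_i, tau_i in years)
         (0.338, 18.51),
         (0.186, 1.186)]

def fraction_remaining(t_years):
    """Fraction of an emitted CO2 pulse still airborne after t years."""
    return A0 + sum(a * math.exp(-t_years / tau) for a, tau in TERMS)

for t in (0, 10, 100, 1000):
    print(f"after {t:>4} years: {fraction_remaining(t):.2f}")
```

The key feature is the constant term: even after a millennium, roughly a fifth of the pulse is still in the air, which is why cumulative emissions dominate.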

It seems at least logically possible that the timing of emissions could matter. Imagine, for instance, that crossing a certain concentration threshold had especially harmful consequences. If so, spreading out human emissions so that absorption of CO2 by the oceans kept the concentration below that cap could be quite beneficial.

It seems an important question to sort out, given how the whole BuryCoal project is focused on limiting total human emissions, rather than trying to space them out.

“Don’t be evil”

The above, famously, is Google’s motto. When I first saw it, it seemed like an embodiment of the ways in which Google differs from other large corporations. They are involved in charitable works, in areas including infectious disease and renewable energy. Furthermore, they give away most of their products, getting the financing from those famous automatic ads.

On further reflection, however, “Don’t be evil” isn’t some lofty, laudable goal we should applaud Google for having. Rather, it is the absolute minimum required of them, given just how much of our personal information they have acquired. Think about GMail: many of us have tens of thousands of messages, many of them highly personal, entrusted unencrypted to Google’s servers. If they were evil – or even a few of their employees were – they could embarrass or blackmail an enormous number of people. What Google has is, in many cases, far more intimate than what sites like Facebook have. Facebook may have some private messages to your friends, but Google is likely to have financial information, medical test results, photos you would never put on Facebook, etc.

Now, Google has incorporated a very useful phone calling system into GMail. Install a plugin, and you can make free calls to anywhere in Canada and the United States. In my limited experience, it seems to work better than SkypeOut, while being free to boot. Of course, it is another example where we really need to trust Google to behave ethically. For Google Voice, they already developed algorithms to convert spoken words into transcribed text. Users of their phone service need to trust that their conversations are not being archived or – if they are – that the transcripts will not be used in any nefarious ways.

In short, Google must avoid being evil not out of benevolence, but because their whole business model requires people to view them that way. So far, their products have been remarkably empowering for a huge number of people (any other sort of email seems deeply inferior, after using GMail). If they are going to maintain the trust of users, however, they are going to need to avoid privacy disasters, or at least keep them on a pretty minor scale, like when Google Buzz abruptly let all your friends know who else you are in contact with.

How much can one person steal?

Perhaps one reason why intellectual property law is in such a strange state now is how dramatically the value a single person can steal has increased.

The most a human being has ever lifted (briefly) in Olympic weightlifting is 263.5 kg, a record set by Hossein Rezazadeh at the 2004 Summer Olympics. Right now, the price of gold is about US$1,300 an ounce for Canadian Gold Maple Leaf coins. That means the world weightlifting record (just under 8,500 troy ounces) represents about C$12 million worth of gold.
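The conversion is straightforward to verify, using the standard 31.1035 grams per troy ounce:

```python
# Checking the gold arithmetic: the 263.5 kg world record lift,
# converted to troy ounces and priced at US$1,300 per ounce.
GRAMS_PER_TROY_OZ = 31.1035
record_kg = 263.5
price_per_oz_usd = 1300.0

troy_oz = record_kg * 1000 / GRAMS_PER_TROY_OZ
value_usd = troy_oz * price_per_oz_usd

print(f"{troy_oz:.0f} troy ounces, worth about US${value_usd / 1e6:.1f} million")
```

That comes to roughly US$11 million, consistent with the C$12 million figure given the exchange rate at the time.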

Compare that with the losses potentially associated with a book or DVD getting pirated early, or a pharmaceutical manufacturing process getting released to a generic drug manufacturer, and it seems clear that the value in goods that a person can now steal is substantially higher. I remember one memorable illustration of this in fiction, from Jurassic Park. In it, corporate spy Dennis Nedry tries to steal 15 dinosaur embryos, developed as the result of painstaking genetic reconstruction undertaken by his employers. He is offered something like $1.5 million for these (I don’t remember exactly how much), but they were surely worth more to both his employer and to whoever was trying to acquire them.

Lots of other pieces of fiction focus on the fate of valuable intangible commodities. For instance, in William Gibson’s Neuromancer, the principal thing being stolen (at considerable difficulty and loss of life) was three musical notes, which in turn served as a control on a computer system.

When people are stealing gold, or diamonds, or cattle, or DVD players there is a fairly set limit to how much they can actually make off with. Furthermore, after such thieves are caught, there is a good chance that much or all of their loot can be restored to its rightful owners. Compare that to some savvy teenager who comes across a valuable bit of information and publishes it online: the value is potentially enormous, and the scope for ‘setting things right’ pretty much non-existent. Of course, locking up grandmothers whose computers have been used to download a Lady Gaga song or two isn’t a sensible thing to do, regardless.

The Pleasure of Finding Things Out

Probably the most problematic thing about writing associated with Richard Feynman is repetition. Both his books and books about him tend to be at least quasi-biographical, and often feature the same stories, examples, explanations, and even bits of writing.

The Pleasure of Finding Things Out certainly suffers from this flaw, at least for those who have read one or two Feynman books before. It includes, for instance, his appendix to the Challenger inquiry report, which also formed a major part of What Do You Care What Other People Think? It also features Feynman’s thoughts on ‘cargo cult science’, which have been reproduced elsewhere.

All that said, the book does contain some interesting materials that do not seem to be widely available elsewhere, particularly on the subject of nanotechnology. Going back to first principles, Feynman considers what lower size limits exist for things like motors, computer processors, and data storage systems. He concludes that there is ‘plenty of room at the bottom’ and thus enormous scope for improving our capabilities in computing and other fields by relying upon very small machinery and techniques like self-assembly.

Torpedoes, Pearl Harbor, and the atomic bomb

One of the most interesting things about Richard Rhodes’ detailed history of the making of the atomic bomb is the way it gives the reader a better sense of context. This is especially true when it comes to things happening in very different places and spheres of life. It would take an unusual facility with dates, for instance, to realize how the timeline of research into the abstract physical questions about the nature of atoms lined up with political, economic, and military developments.

One grim but interesting example comes from the end of Chapter 12. In November 1941, Franklin Delano Roosevelt had just committed the United States to the serious pursuit of an atomic bomb based upon enriched uranium (U235) and three methods for producing the substance were to be attempted: gaseous diffusion, electromagnetic separation, and centrifuges (the approach Iran is using now). On December 7th of that year, the Japanese Navy attacked the American base at Pearl Harbor.

Rhodes describes how Japanese research into atomic weapons began with the personal research of the director of the Aviation Technology Research Institute of the Imperial Japanese Army – Takeo Yasuda – in 1938, and expanded into a full report on the possible consequences of nuclear fission in April 1940. Rhodes also describes a somewhat grim coincidence involving Japan, the United States, and atomic weapons. He describes how ordinary torpedoes would not have worked for the Pearl Harbor attack, because the water was insufficiently deep. As such, the torpedoes used had to be modified with a stabilizer fin and produced in sufficient quantity for the pre-emptive strike to be successful:

Only thirty of the modified weapons could be promised by October 15, another fifty by the end of the month and the last hundred on November 30, after the task force was scheduled to sail.

The manufacturer did better. Realizing the weapons were vital to a secret program of unprecedented importance, manager Yukiro Fukuda bent company rules, drove his lathe and assembly crews overtime and delivered the last of the 180 specially modified torpedoes by November 17. Mitsubishi Munitions contributed decisively to the success of the first massive surprise blow of the Pacific War by the patriotic effort of its torpedo factory in Kyushu, the southernmost Japanese island, three miles up the Urakami River from the bay in the old port city of Nagasaki. (p.393 paperback)

That attack – launched partly in response to the American embargo of aviation fuel, steel, and iron going into Japan – sank, capsized, or damaged eight battleships, three light cruisers, three destroyers, and four other ships. The two waves also destroyed or damaged 292 aircraft and killed 2,403 Americans, while wounding another 1,178. More than 1,000 people were killed in the sinking of the U.S.S. Arizona alone.