The atmospheric longevity of carbon dioxide

How long does carbon dioxide emitted by human beings remain in the atmosphere? It turns out to be a tricky question. Different mechanisms remove carbon at different rates, and each responds differently to higher concentrations of carbon dioxide in the atmosphere.

Probably the most important distinction is between sinks with a capacity that can be exhausted and those that are effectively limitless. The oceans and the biosphere are of the first kind, and they respond to carbon dioxide in the atmosphere relatively quickly. That being said, there is a limit to how much carbon dioxide the ocean can absorb (and the fact that it becomes more acidic while doing so is problematic), and there is only so much biomass the planet can sustain. Weathering rock that absorbs carbon and is then subducted below the seafloor is an example of the second type of sink, though it operates very slowly, and volcanic eruptions can return carbon that has been locked into the lithosphere to the atmosphere. Even without such eruptions to worry about, natural weathering is not the route to a stable climate on a human timescale. As the Nature article linked above explains: “it would take hundreds of thousands of years for these processes to bring CO2 levels back to pre-industrial values.”

The article also comments on how long the temperature anomaly from anthropogenic emissions will persist: “whether we emit a lot or a little bit of CO2, temperatures will quickly rise and plateau, dropping by only about 1°C over 12,000 years.” Make no mistake: our choices about how much carbon dioxide we emit will have a big impact on a huge number of future generations.

First venture into RAW

The photo above is the first one I ever produced after the fact, using the RAW data from a digital sensor. Given my current suite of software (iPhoto ’08, Photoshop CS, and Canon’s Digital Photo Professional), using RAW is a bit of a pain. iPhoto imports RAW files incorrectly (producing odd black frames), at least when the camera is set to generate both RAW and JPEG files simultaneously. Canon’s EOS Utility works for getting the .CR2 files (Canon’s proprietary RAW format) off the camera, but it does so slowly and imports redundant copies of the JPEG files.

All that being said, there are good reasons to put up with the bother. RAW lets you adjust the white balance and exposure far more effectively after the fact than JPEG does, and ultimately represents a far superior digital negative. For now, RAW files may be an awkward annoyance even on my excellent new Mac. In a few years, the storage space and processing power to deal with them will be ubiquitous.

In short, it seems worth shooting RAW+JPEG whenever there is a decent chance you will want to use any photo in an artistic way.

Protocol for post-fire alarm de-evacuation of office towers

When a fire alarm causes the evacuation of an office tower, the evacuation and return are rather disruptive. Part of the problem is the tendency of people to return in random order, once the alarm has stopped. That means the elevators need to stop on a random collection of floors, sometimes to drop off just one or two people, before returning to the ground floor to collect more people from the throng down there.

Everybody would be better off if the throng organized itself in order from lowest to highest floor. People from the second floor could be transported first (assuming they are unwilling to endure one flight of stairs), then people from the third floor would begin moving up. Floor by floor, the entire group would be progressively transported up. That way, each elevator only needs to stop on one non-lobby floor and the total time spent by the group waiting for elevators is minimized.
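
The intuition is easy to check with a toy simulation. Here is a minimal sketch in Python, with all numbers hypothetical: ten-person elevator cars and 200 evacuees destined for floors 2 through 20. It counts how many floor stops are needed to deliver everyone, loading people either in random order or sorted by floor:

```python
import random

def total_stops(destinations, capacity=10):
    """Count the floor stops needed to deliver everyone,
    loading passengers in the order given, one carload at a time."""
    stops = 0
    for i in range(0, len(destinations), capacity):
        carload = destinations[i:i + capacity]
        stops += len(set(carload))  # one stop per distinct floor in this car
    return stops

# Hypothetical crowd: 200 evacuees headed to floors 2 through 20.
crowd = [random.randint(2, 20) for _ in range(200)]

print("random order:", total_stops(crowd))          # many distinct floors per trip
print("sorted order:", total_stops(sorted(crowd)))  # one or two floors per trip
```

With random boarding, each carload contains close to ten distinct destinations; with sorted boarding, each carload spans only a floor or two, so the total number of stops falls several-fold.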

Of course, as with so many systems designed to optimize the outcome for everyone, there are significant opportunities for selfish behaviour. Someone from the 20th floor could join the second floor crowd, then get a prompt solo ride up once the other people in the car have been dropped off. Countermeasures to prevent cheating could involve social pressure (shunning those who jump the queue) or technical means (restricting the elevators to move up sequentially, floor by floor).

Legit Monty Python becoming available online

In another victory for the internet at large, Monty Python has launched a YouTube channel – providing free access to an increasing number of their videos at reasonably high quality.

For the uninitiated, and those seeking to rekindle their appreciation for all things Python – I offer a few viewing suggestions:

They will be using the channel to try and sell DVDs of their films and television episodes, but that seems very fair.

I look forward to when some of my other favourite sketches become available, such as the Cheese Shop sketch, the ROMANES EUNT DOMUS segment from The Life of Brian, the Crunchy Frog sketch, ‘I Wish to Report a Burglary,’ and the ‘I’d Like to Get Married’ sketch.

Cold, glass, and condensation

Users of cameras and eyeglasses will be familiar with the phenomenon of fogging, which occurs when one goes from a cold and dry place into a warm one. Air can hold about 7% more water per unit of volume for each additional ˚C of temperature, which means that air in warm places is naturally more laden with water than air in cold ones. When that water-laden air hits cool glass, the water condenses into a fog that confounds the bespectacled and shutterbugs alike.
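
The 7% figure can be checked against the Magnus approximation for saturation vapour pressure, a standard empirical formula (the constants below are one common parameterization). A minimal sketch:

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

for t in (-10, 0, 10, 20):
    ratio = saturation_vapour_pressure(t + 1) / saturation_vapour_pressure(t)
    print(f"{t:>3}˚C: {100 * (ratio - 1):.1f}% more water vapour per additional ˚C")
```

Across everyday temperatures the answer runs from roughly 6% to 8% per degree, so “about 7%” is a fair summary.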

The other night, I witnessed a special elaboration of this phenomenon, unique to conditions combining (a) a very cold and dry night, (b) a fairly large volume of glass, and (c) an instant transition to a warm and relatively humid coffee shop.

The normal fogging occurred, but it would not dissipate after several minutes of waiting. It was then that I noticed that the glass on which the fog had formed was cold enough to freeze it – leaving a thin sheet of ice on the lens. The remedy was a few minutes of huffing to melt the ice, followed by a few more waiting for evaporation.

I am a bit surprised not to have experienced this working with cameras in Finland or Estonia. Like getting mild frostbite walking home from a party, it seems to be an Ottawa experience.

Canada’s new 90% target for non-GHG emitting electricity

Note: In light of a perceptive comment, I have made some revisions to the numbers in the post below.

In yesterday’s Speech from the Throne, the government pledged to increase the share of Canada’s electricity generated from non-emitting sources to 90% by 2020. Looking into the math behind this objective reveals just how ambitious it is. The following numbers are all somewhat approximate, but their precision is not important for revealing the underlying dynamic.

In order to have 90% non-emitting power, you need nine times more capacity in non-emitting sources like hydroelectricity, nuclear, and renewables than you have in emitting capacity like coal and natural gas plants. Right now, Canada has somewhere around 110 gigawatts (GW) of total installed electrical capacity, 70% of which is non-emitting. Using the following basic equation, we can work out how much non-emitting capacity we need in order to reach the 90% objective, based on different scenarios for what happens to the emitting capacity:

0.90 = (gigawatts non-emitting) / (gigawatts non-emitting + gigawatts emitting)
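
Rearranging that equation shows what the target demands:

(gigawatts non-emitting) = 9 × (gigawatts emitting)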

In every case, you have nine times more non-emitting (clean) capacity than emitting (dirty) capacity. Therefore, getting to the 90% target while retaining all 33 GW of Canada’s dirty capacity means bumping our clean capacity from 77 GW to 297 GW – an increase of 220 GW.

To put that in perspective, 220 gigawatts is twice Canada’s current total electrical generating capacity. 220 gigawatts is more than thirty-two times the capacity of the Grand Coulee Dam and is equivalent to more than thirty-five times the output of the Bruce Nuclear Generating Station. 220 gigawatts is more than nine times the generating capacity of the Three Gorges Dam.

Things get worse if you expand Canada’s dirty electricity generating capacity. If we were foolish enough to double it, we would need 517 GW of new clean energy to achieve the 90% target. Cutting the dirty capacity by 50% from today’s level means we would need to build another 71.5 GW of clean capacity. If we cut the dirty capacity by 75%, we would be able to reach the 90% target with no new clean capacity built.
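
These scenarios are easy to verify. A minimal sketch of the arithmetic in Python, using the figures above (77 GW of existing clean capacity and a 90% target; the scenario labels are mine):

```python
def new_clean_needed(dirty_gw, existing_clean_gw=77.0, target=0.90):
    """GW of new clean capacity needed so clean / (clean + dirty) reaches the target."""
    required_clean = dirty_gw * target / (1.0 - target)  # nine times dirty, for 90%
    return max(0.0, required_clean - existing_clean_gw)

scenarios = {
    "keep today's 33 GW of dirty capacity": 33.0,
    "double the dirty capacity": 66.0,
    "halve the dirty capacity": 16.5,
    "cut the dirty capacity by 75%": 8.25,
}
for label, dirty_gw in scenarios.items():
    print(f"{label}: {new_clean_needed(dirty_gw):.1f} GW of new clean capacity")
```

This reproduces the 220, 517, 71.5, and 0 GW figures above.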

The reasons for all this are intuitive enough. It is like a lever where the arm on one side of the fulcrum is nine times longer than the other. If you want to balance out the weight on the long arm (equivalent to the dirty capacity), you need to add an awful lot of weight to the short arm (equivalent to clean capacity).

Of course, all this is rather misleading when considered in the abstract. It’s not as though doubling our dirty capacity would be just fine if we also built 517 GW of new dams, wind farms, and nuclear stations. What is important in the end is the total quantity of Canadian emissions: an outcome only partially influenced by the balance between zero-emission and high-emission electricity capacity. The fact that the 90% figure is unaffected by replacing coal plants with less emissions-intensive gas plants also demonstrates how problematic it is as a metric.

The final possibility to mention here is that of carbon capture and storage (CCS). If it proves effective and economical, applying it to existing dirty facilities would be equivalent to switching them into the clean column. Realistically, CCS will probably only ever capture 80-90% of the emissions from any facility it is coupled with. Applying that imperfect technology to a coal-fired behemoth like the Nanticoke Generating Station wouldn’t shift it from the dirty column to the completely clean one, but it would represent a useful chunk of real reduction in the quantity of climate-altering greenhouse gases Canada is emitting into the atmosphere.

[Update: 8:11pm] For those interested in the numbers on this, please have a look at this post on Tyler Hamilton’s blog and the discussion below it.

The single cheapest way to improve your photography

There seem to be a lot of people out there who are succeeding at producing appealing and artistic images using low-cost photographic equipment. A case in point: Canon’s lowest-cost point and shoot digital cameras. They cost less than $200 brand new, and yet it is certainly possible to produce museum-quality photography with them, if you have enough creativity and awareness of light.

Arguably, the worst thing that ever happened to popular photography was the emergence of the on-camera flash. It has given too many photographers the idea that light doesn’t matter. After all, they have brought along their own tiny flashbulb.

In the great majority of cases, disabling that flash is an excellent first step. The second step – alluded to in the title – is buying yourself a little tripod. Personally, I use a $10 UltraPod mini, kept constantly attached to my $180 A570 IS camera. While everyone else was making hopeless attempts to light up the roof of Notre Dame Cathedral or the Blue Mosque with their on-camera flashes, I was getting decent photos of them by bracing the tripod on walls, the floor, or furniture.

Anyone who is serious about photography with a small camera should buy one.

The virtues of digital photography

While there are certainly benefits to film, there are also many excellent reasons why people are switching to digital. The sensors in even the low-end digital SLRs have rather good low-light performance: they are less grainy at ISO 1600 than the sensors in point and shoot cameras are at ISO 400 or even 200. The dSLR systems also include features like depth of field preview, mirror lock-up, and bracketing for both exposure and white balance. Also very useful are dedicated controls for things like white balance, ISO, and exposure compensation. Sure, you can set all those things through menus in most good point and shoot cameras, but it is a lot more pleasant to be able to do so on the fly, while still looking through the viewfinder.

As a fan of wide angle lenses, I do find the 1.6× crop factor of small sensors annoying: a 28 mm lens on such a body frames like a 45 mm lens does on film. That being said, dSLRs these days do come with decent kit lenses that cover an appropriately adjusted range. And, of course, there is always the enormous value of being able to take unlimited photos at no marginal cost and get immediate feedback on the results of what you are doing. Being able to consult luminosity and RGB histograms half a second after taking the photo certainly beats having to wait for processing and printing.

In short, there are many virtues to digital photography, especially for those of us who are uncertain about where we will be living in the next few years. As with one’s personal library, shipping around binders of archive-quality negatives is an expense and a pain. Ones and zeros can be zipped around the world at a much lower price, and with less risk to the originals.

The death of film

As amazing as digital single lens reflex (dSLR) cameras have become, it is a bit sad that Canon’s website now includes only one film SLR: the absurdly expensive EOS-1v. Nikon’s page has two: the $2000 F6 and the $350 FM10.

This makes me glad I went ahead and bought an Elan 7N four years ago, while digital bodies were still totally unaffordable. While it lacks the convenience of the digital options, there is still much to be said for film. A cheap roll of Velvia or T-Max can give you better performance than a $5000 digital camera, and negatives are comparatively easy to archive in a way that will endure for fifty or one hundred years. Also, changing the kind of film you use can have a big effect on the kind of photos you produce, and it is a lot easier than buying a new digital sensor with different properties.

No photographic technology ever really dies. There are still artists and enthusiasts who make Daguerreotypes, after all. Film will simply move from being the default medium to one that professionals and hobbyists explicitly select.

For now, people who are interested in getting involved in serious artistic photography should definitely consider the option of picking up a cut-price used film SLR, a bunch of rolls of good film, and some processing and scanning from a good lab. For the price of an entry-level dSLR, you could do a lot of shooting, with equipment that will not be considered any more antiquated in ten years than it is now.

New developments in spam

Remarkably, it seems that 70% of the world’s spam emails were originating from an American firm called McColo. On November 11th, two American internet service providers cut it off from the web, leading to a huge drop in the global volume of spam. It is estimated that 90% of spam messages are actually sent by computers that have been compromised by viruses, which makes it a bit surprising that such a drop could be produced by disconnecting one firm. Clearly, it is a network that needed central direction to operate. The networks that emerge as successors will probably be more robust, located in less policed jurisdictions, or both.

While the respite is likely to be temporary, the situation may reveal some useful information on the practice and economics of spam. This unrelated paper (PDF) examines the latter. The researchers infiltrated a segment of the Storm Botnet and monitored its activity and performance. On the basis of what they observed and estimates of the rest, they concluded that the botnet earned about 3.5 million dollars a year by selling pharmaceuticals. While that isn’t an inconsiderable sum, I suspect it is less than is being spent by companies combatting the flood of spam messages themselves.