Canada’s anti-superbug initiative

Geodesic domes at Winterlude

Canada’s federal government is launching an initiative to combat antibiotic-resistant bacteria. This is a very sensible thing to do, given that bacteria are evolving resistance faster than we are inventing new antibiotics. MRSA and its relatives could well signal a return to a world in which morbidity and mortality from bacterial illness shift back towards the levels prevalent before antibiotics were widely available.

We largely have ourselves to blame for the existence of these bugs. Every time a doctor prescribes unnecessary antibiotics to get a patient out of the office, the bugs get another chance to grow stronger. The same goes for patients who stop taking an antibiotic when they feel better, rather than finishing the course, potentially leaving a few of the most resistant bugs behind to infect others. The same is true of all the ‘antibacterial’ soaps and cleaning products out there. Putting triclosan in soap is pretty poor prioritization: outside the body, it makes the most sense to kill bugs with things they cannot evolve resistance to, like alcohol or bleach. Using the precious chemicals that kill them but not us to clean countertops is just bad thinking. Finally, there is the connection between antibiotics and factory farming, discussed extensively here before.

The federal plan involves a number of prudent steps, many of them specifically targeted at MRSA and Clostridium difficile. These include more active patient screening, better sanitization of hospital rooms, use of protective equipment like gloves and masks, and the isolation of patients carrying resistant strains. Given that there were 13,458 MRSA infections in Ontario hospitals in 2006, such an initiative seems overdue. It would be exceedingly tragic if we comprehensively undermined one of the greatest discoveries in the history of medicine through carelessness and neglect.

SpaceShipTwo

Mailboxes

Virgin Galactic – Richard Branson’s space company – has released the design of its next-generation craft: SpaceShipTwo. The vehicle will carry passengers into the upper atmosphere after being lifted to an altitude of about 15 km by a larger mothership. After spending time at an altitude of 110 km, it will re-enter the atmosphere. While the technology is new and doubtless interesting, there is good reason to ask whether it serves any valuable purpose.

The three aims commonly described for the technology are delivering extremely urgent packages, launching small satellites, and entertaining rich people. While it can certainly be argued that manned spaceflight has not generally been a valuable undertaking, this sort of rollercoaster ride does seem like an especially trivial use of technology. For about $200,000, you get a few minutes in microgravity, the view out the windows, and bragging rights thereafter. Satellite launching could be a lot more useful, though the Virgin group has yet to demonstrate that its vehicles are capable of doing so – a situation that applies equally to the idea of making 90-minute deliveries anywhere in the world.

The Economist provides an especially laughable justification for the whole undertaking, arguing:

When space becomes a democracy—or, at least, a plutocracy—the rich risk-takers who have seen the fragile Earth from above might form an influential cohort of environmental activists. Those cynics who look at SpaceShipTwo and think only of the greenhouse gases it is emitting may yet be in for a surprise.

Hopefully, they won’t become ‘environmental activists’ of the Richard Branson variety: investing in airplanes and gratuitous spacecraft while hoping someone will develop a machine that will somehow address the emissions generated.

Common descent and biochemistry

Steam pipes in snow

Despite the dizzying array of life on Earth – if you doubt that, watch the BBC’s excellent Planet Earth series – there is a remarkable degree of biochemical consistency among all living things. This is one of the strongest arguments in favour of common descent: the idea that all living things are descended from the first replicators so evocatively described in the opening chapter of Dawkins’ The Selfish Gene. The very strongest evidence for that thesis comes not from the universality of the truly essential mechanisms of life, but from the universality of arbitrary conventions common to all living things.

Some of the more astonishing elements of life are universal: the storage of genetic information in strands of DNA or RNA, the use of three-nucleotide codons to refer to amino acids, and the dominant role of proteins in cellular architecture. These are common to animals and plants, fungi, bacteria, and archaea. It is difficult to imagine how living things would look if they were based on alternatives to this basic system.

Then, there are elements of common biology which need to be in place, but are somewhat arbitrary: for example, the metabolism of glucose for energy and the use of adenosine triphosphate (ATP) as an energy carrier. Something needs to play these roles, but there are presumably other molecules that could serve the purpose. Also, unlike the consistencies in the first category, life would not be staggeringly different if different molecules served these purposes.

Finally, there are what might be considered arbitrary conventions – things that were established at the origin of life, are common to all life, but which could just as well be another way or a patchwork of different ways. This includes the use of only 20 amino acids to make proteins, and the fact that the L-isomers of these acids are used. It also includes how cells maintain a lower concentration of sodium inside themselves than exists in the surrounding medium, along with a higher concentration of potassium inside. It could just as well have been the other way around.
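The codon convention in the first category is concrete enough to sketch. As a minimal illustration – only a handful of the 64 codons are included, though the assignments shown are from the standard genetic code – the same lookup table works whether the DNA comes from a bacterium or a human:

```python
# A fragment of the standard genetic code: the same three-letter codons
# map to the same amino acids in bacteria, archaea, fungi, plants, and
# animals (barring a few minor exceptions). Only 10 of 64 codons shown.
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the usual start codon
    "TTT": "Phe", "TTC": "Phe",                              # phenylalanine
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # glycine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read a DNA string three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly', 'STOP']
```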

In a sense, it is the third category that provides the best evidence of common descent. It is like language: pretty much any language will need a way to refer to objects and to actions performed upon them. As such, the inclusion of these aspects in different languages isn’t really evidence of relation. When you find a language that has a number of arbitrary conventions in common with another (say, an alphabet), you have more reason to think they both evolved from something older.

While statistics suggest it is highly likely that life has emerged independently elsewhere, it would nonetheless be rather thrilling to actually find it, somewhere out among other planets or distant stars.

Australia’s geothermal potential

Docks near Lonsdale Quay

For a country that relies on coal for 83% of its power, with an economy producing 25.9 tonnes of carbon dioxide equivalent per person, Australia’s Innamincka desert could prove a blessing. This is not because of the sunshine hitting it, but because of the geothermal heat suffused through the granite beneath it.

Initial tests have found that the granite is at approximately 250°C, meaning that each cubic kilometre can yield as much energy as 40 million barrels of oil. If it proves viable to use this heat to boil water and drive turbines, the share of Australian power derived from renewables could increase considerably. According to Emeritus Professor John Veevers of Macquarie University’s Department of Earth and Planetary Sciences, the rocks could “supply, without emissions, the baseload electrical power at current levels of all consumers in Australia for 70 years.”
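As a rough plausibility check on that oil-equivalence figure, consider the heat capacity of granite. The density and specific heat below are typical textbook values, and the 100°C of useful cooling is my own illustrative assumption, not a number from the project:

```python
# Back-of-envelope: heat recoverable from one cubic kilometre of hot granite,
# expressed in barrels of oil. All inputs are assumed typical values.
density = 2650.0        # kg per cubic metre, typical for granite
specific_heat = 790.0   # J/(kg*K), typical for granite
volume = 1.0e9          # cubic metres in one cubic kilometre
delta_t = 100.0         # kelvin of useful cooling (illustrative assumption)

heat_joules = density * specific_heat * volume * delta_t
joules_per_barrel = 6.1e9  # approximate energy content of a barrel of oil

print(f"Recoverable heat: {heat_joules:.2e} J")
print(f"Oil equivalent: {heat_joules / joules_per_barrel / 1e6:.0f} million barrels")
```

That works out to roughly 34 million barrels – the same order of magnitude as the quoted figure, which presumably assumes somewhat deeper cooling.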

Naturally, it is not sufficient just to have hot rock within drilling distance. It will have to be economical to construct the power generation equipment. There will be a need for water to use as a heat carrier. Finally, it will be necessary to build transmission capacity linking new facilities with Australian cities.

In a sense, a geological formation like this is like the oil sands in reverse. Both exist in large countries with economies that depend to a considerable degree on primary commodities. Likewise, both exist in states with shockingly high per-capita greenhouse gas emissions. There are questions about commercial viability and water usage of both projects, but the broader issue with Innamincka is how many megatonnes of carbon dioxide can be kept out of the atmosphere, rather than how much will be produced through a bituminous bonanza.

Improving energy efficiency through very smart metering

Milan Ilnyckyj

With existing technology, it is entirely possible to build houses that allow their owners to be dramatically more energy-aware. For instance, it would be relatively easy to build electrical sockets connected to a house network. It would then be possible to see graphically or numerically how much power is being drawn by each socket. It would also be easy to isolate the energy use of major appliances – furnaces, dishwashers, refrigerators – thus allowing people to make more intelligent choices about the use and possible replacement of such devices. In an extreme case, you could have a constantly updating spreadsheet identifying every use of power, the level being drawn, the cost associated, and historical patterns of usage.
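Nothing exotic is needed on the software side. A minimal sketch of the record-keeping involved might look like this (the socket names and electricity rate are invented for illustration):

```python
from collections import defaultdict

RATE_PER_KWH = 0.10  # illustrative electricity price, dollars per kWh

class HouseMeter:
    """Toy model of per-socket energy tracking over a house network."""

    def __init__(self):
        # Maps a socket label to a list of (hours, watts) readings.
        self.history = defaultdict(list)

    def record(self, socket, hours, watts):
        self.history[socket].append((hours, watts))

    def cost(self, socket):
        """Total cost of the energy drawn through one socket."""
        kwh = sum(h * w / 1000.0 for h, w in self.history[socket])
        return kwh * RATE_PER_KWH

meter = HouseMeter()
meter.record("refrigerator", hours=24.0, watts=150.0)
meter.record("dishwasher", hours=1.5, watts=1800.0)
for socket in meter.history:
    print(f"{socket}: ${meter.cost(socket):.2f} today")
```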

Being able to manage electrical usage through a web interface could also be very helpful. People could shift some of their power use to low-demand times of day. They could also lower the temperature in their houses while out, and have it rise again so things are comfortable by the time they get home. Such controls would also be very useful to people with some sort of home generating capacity, such as an array of solar panels. A web interface could provide real-time information on the level of energy being produced and the quantity stored.

While all of these things are entirely possible, there seem to be two big barriers to implementation. The first is convincing people to install such systems in new houses or during retrofits. The second is making the systems intuitive enough that non-technical people can use them well. The first obstacle could be partially overcome through building codes and carbon pricing; the second is mostly a matter of designing good interfaces. Perhaps an Apple iHome is in order.

Google’s commitment to renewables

Hilary McNaughton

Google.org – the philanthropic arm of the internet search giant – is seeking to use the cognitive and financial resources of its parent to improve the world. Google has promised to eventually fund the organization using 1% of its equity, profit, and employee time. The real question is whether they will prove able to leverage their particular advantages and achieve outcomes of real significance. There is much reason to hope that they will.

From an environmental perspective, the awkwardly named “RE<C” initiative is the most exciting. The goal is to “develop electricity from renewable energy sources that is cheaper than electricity produced from coal… producing one gigawatt of renewable energy capacity – enough to power a city the size of San Francisco – in years, not decades.” This is certainly an ambitious undertaking, in part because the true price of coal is not being paid: all the environmental pollution associated with coal mining and burning is being left off the balance sheet, at least in America. If Google can produce renewable technologies that outperform coal economically even in the absence of carbon pricing, it will start to look feasible to begin dismantling the global fossil fuel economy.

It is probably fair to say that meeting this goal would be a more significant contribution to human welfare than everything Google has done so far. Here’s hoping all those brains and dollars come together brilliantly. Of course, as much as we might hope for such a technological rescue, it’s not something to bet on. Even in the absence of breakthrough renewable technologies, the path to a low-carbon future is pretty well marked out: carbon pricing, regulation in demand-inelastic sectors, energy conservation, and massive deployment of existing low-carbon technology.

Earth Flotilla

Oleh Ilnyckyj

The 1997 and 1998 LIFEboat Flotillas were exceptional undertakings that I was privileged to participate in. Organized by Leadership Initiative for Earth, each centred on a week-long sailing trip in the Gulf Islands of British Columbia, intended to make young people more aware of environmental issues and better connected with others who share that interest.

In March of this year, a smaller but similar expedition is taking place, organized by the World Wildlife Fund in cooperation with some of the people involved in the original flotillas. Applicants must be residents of British Columbia between the ages of 13 and 17, interested in environmental issues, and willing to put in the time required.

As someone lucky enough to have done something similar in the past, I recommend the opportunity wholeheartedly. If any readers of this blog match the description – or know people who do – application information is online.

[Update: 11 February 2008] I am pleased to report that Tristan’s brother will be participating in the Earth Flotilla – and that his family found out about it from this site, no less.

Radiation types and units

Types of radiation

Radiation is categorized in several different ways. One is on the basis of energy levels: ionizing radiation is sufficiently energetic that it can cause an atom or molecule to be stripped of an electron, turning it into an ion. This depends on the energy level of the individual particles or waves and has nothing to do with the total number of them. Non-ionizing radiation is simply that which doesn’t have enough energy to liberate an electron.

Another way to classify radiation is in terms of whether it is electromagnetic (consisting of photons) or particle radiation. There are three types of particle radiation: alpha decay, based on the emission of two protons and two neutrons bound together as a helium nucleus; beta decay, wherein the particle emitted is an electron; and neutron radiation, where atoms release neutrons. Alpha particles are not generally very dangerous, because they cannot penetrate much of anything: even a few centimetres of air has a strong protective effect. That said, ingesting an alpha emitter can still be highly dangerous – the polonium-210 that killed Alexander Litvinenko is one. Beta particles can usually be shielded against using a few millimetres of lead. Neutron radiation is unusual insofar as it is capable of inducing radioactivity in the atoms it encounters; shielding consists of a large mass of hydrogen-rich material.

Electromagnetic radiation with sufficient energy to be ionizing consists of x-rays and gamma rays. Both consist of high-energy photons (those with short wavelengths), with gamma rays having shorter wavelengths than x-rays (around 10^-12 m rather than 10^-10 m). Shielding, especially for gamma rays, must be dense and fairly extensive.
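Those wavelengths can be turned into photon energies using E = hc/λ. A quick check, using standard physical constants and the wavelengths given above:

```python
# Photon energy E = h*c/wavelength, expressed in electronvolts.
h = 6.626e-34    # Planck's constant, joule-seconds
c = 2.998e8      # speed of light, metres per second
joules_per_ev = 1.602e-19

for name, wavelength in [("x-ray", 1e-10), ("gamma ray", 1e-12)]:
    ev = h * c / wavelength / joules_per_ev
    print(f"{name}: {ev:.3g} eV")

# x-ray: ~12.4 keV; gamma ray: ~1.24 MeV - both far above the few
# electronvolts needed to strip an electron from an atom.
```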

Measuring radiation

Radiation is also measured in a variety of ways: important ones being Roentgens, rads, rems (Roentgen equivalent in man), Curies, Becquerels, and Sieverts.

Becquerels are a unit of radioactive decay based only on the number of decays per second: one Becquerel is one decay per second. A Curie is equal to 3.7 x 10^10 Becquerels, and is approximately the activity of one gram of radium-226. These units reflect the number of emissions only – not their physical or biological effects.

A Roentgen is a measure of ionizing radiation based on the amount of charge liberated per unit mass of air. Rads are a largely obsolete unit of absorbed dose, equal to 100 ergs of energy absorbed by one gram of matter. Rems are the product of the absorbed dose in rads and the relative biological effectiveness of the radiation; they are also considered highly dated. About 450 rems is an approximate median lethal dose (LD50) for those who do not receive prompt treatment.

Sieverts are the recommended replacement, “found by multiplying the absorbed dose, in grays, by a dimensionless ‘quality factor’ Q, dependent upon radiation type, and by another dimensionless factor N, dependent on all other pertinent factors.” The LD50 for ionizing radiation is about 3-5 Sieverts (on the order of 5 grays of gamma radiation). Where the quality factor equals one, one Sievert is 100 rems.
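The relationships among these units are simple enough to capture in a few lines. A sketch (note that the quality factor Q defaults to 1 below, the value for photons and electrons; alpha particles and neutrons carry much higher factors):

```python
# Conversions among the radiation units discussed above.
def curies_to_becquerels(ci):
    return ci * 3.7e10      # 1 Ci = 3.7 x 10^10 decays per second

def rads_to_grays(rad):
    return rad / 100.0      # 1 Gy = 100 rad = 1 J/kg absorbed

def grays_to_sieverts(gy, q=1.0, n=1.0):
    return gy * q * n       # Q = 1 for photons and electrons

def rems_to_sieverts(rem):
    return rem / 100.0      # 1 Sv = 100 rem

# The approximate untreated LD50 mentioned above, expressed both ways:
print(rems_to_sieverts(450.0))  # 4.5 Sv
print(grays_to_sieverts(5.0))   # 5.0 Sv, for gamma radiation
```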

Water and nuclear power

Bus tire

Once the heat generated by nuclear fission has finished spinning the turbines in a nuclear power plant, it must somehow be dissipated into the wider environment. Almost invariably, this is done using large amounts of water drawn from nearby rivers and lakes. Now, for plants located in drought-struck regions such as the southeastern United States, water scarcity threatens to force shutdowns and the costly purchase of energy from other jurisdictions.

The Associated Press estimates that 24 of America’s 104 nuclear reactors are located in areas currently experiencing severe drought. One reactor outside Raleigh, North Carolina will need to be shut down if the water level in its lake falls by another 3.5 feet. In total, nuclear power provides about one-fifth of the American electricity supply. All but two American nuclear plants are cooled using water from lakes and rivers. Some plants evaporate large amounts of water from cooling towers, while others are designed to return the warmed water to the body that originally provided it. Extending intake pipes to draw from lower water levels would be costly, and would worsen problems with sediment being drawn into the cooling system.

All this demonstrates the degree to which many forms of low-carbon energy generation are themselves vulnerable to climate change. Concern about water being a limiting factor in energy production is already acute in Australia. Dams face risks from both drought and the loss of snowpack in mountain ranges (leading to too much water at some times of year and not enough at others). Even wind turbines may be vulnerable to changes in dominant patterns of air circulation. Designing future infrastructure with possible climate changes in mind is essential, if we are not to find ourselves with a lot of expensive hardware rendered useless by changed conditions.

You must do the heaviest / So many shall do none

Conch shell and plants

When it comes to reducing personal environmental impact in any sphere (pollution, climate change, resource depletion, etc), there comes a point where each individual says: “That is too great a sacrifice.” Some people would refuse to give up incandescent bulbs; some, eating meat; some, driving their cars; some, flying in jets. The question arises of what to do when there is a fundamental conflict between an ethical requirement and a person’s will. In the modern world, this applies perhaps most harshly to air travel.

We know that very substantial emissions are associated with flying. We also know that substantial emissions will cause human suffering and death in the future. One flight emits significantly more than a single person can sustainably emit in a year. Every year emissions stay above sustainable levels, the concentration of greenhouse gases rises; each year in which that happens, the mean energy absorbed by the planet increases. At some point, this process would inevitably cause massive harm to human beings and non-human living things. It is also plausible that positive feedbacks could create abrupt or runaway climate change, either of which could cause human extinction or the end of human civilization. In the face of that, it is difficult to say that flying isn’t morally wrong.
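To put rough numbers on that claim – every figure below is an illustrative assumption of mine, not something from this post – compare one long return flight against a commonly cited sustainable annual budget:

```python
# Rough comparison: one long round-trip flight versus a sustainable
# annual per-person emissions budget. All numbers are illustrative.
distance_km = 16000.0     # e.g. a long-haul return trip
kg_co2e_per_pax_km = 0.2  # rough per-passenger estimate, incl. non-CO2 effects
flight_tonnes = distance_km * kg_co2e_per_pax_km / 1000.0

sustainable_tonnes = 2.0  # commonly cited per-person annual budget

print(f"One trip: ~{flight_tonnes:.1f} t CO2e")
print(f"Sustainable yearly budget: ~{sustainable_tonnes:.1f} t CO2e")
```

On these assumptions, a single long trip uses more than a year’s worth of sustainable emissions on its own.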

At the same time, most people find it impossible to say that it is. Partly, this is a failure of imagination: they cannot picture a world in which people don’t fly. Mostly, though, it reflects the powerful kind of denial that lets people continue living as they do, even when confronted with convincing evidence of the wrongness of their behaviour. The rationalizations are myriad: (a) why should I stop when others will just continue? (b) there has to be a balance between acting ethically and getting what I want. Neither has any ethical strength in the face of a known and significant wrong. At the same time, it is implausible that people will abandon their self-deception, or that external forces will constrain their behaviour effectively. If that is true, our future really isn’t in our hands. We are slaves to fate: to what technological innovation might bring, and to how sensitive the climate really is to greenhouse gases.