Tell me what thy lordly name is on the night’s Plutonian shore

For an interesting example of the connections between science and public policy, look at the recent efforts to re-categorize the planets, a word that derives from the Ancient Greek term for ‘wanderer.’ The International Astronomical Union, the scientific body charged with astronomical naming since 1919, met in Prague recently to try to do so. At least two competing options were advanced: one was basically to grant planet status to any object in the solar system that has sufficient gravity to have become spherical. This would expand the ranks of planets to twelve, adding Charon (the moon of Pluto), Ceres (a large asteroid), and a distant object called 2003 UB313. (Mythology appreciators may recall that Ceres is the mother of Persephone, whom Pluto kidnapped to the underworld and made his queen.)

The alternative, based on different considerations, strips Pluto of its status as a planet. Along with the other objects listed above, barring Charon, which is to remain a moon, it will join the ambiguous category of ‘dwarf planet.’ Unsurprisingly, the director of a NASA robotic mission to Pluto is irked by the change. Naturally, funding and attention find themselves tied to terms and definitions that are often arbitrary. Note the scramble to brand all manner of research ‘nanotechnology’ in hopes of capturing the interest, and cash, that is attaching itself to that branch of science. The connection between the attention paid to science and essentially arbitrary phenomena seems especially important when it comes to how the general public is exposed to scientific developments. Remember all the media flurry about the race of ‘hobbit’ proto-humans (Homo floresiensis)? How much less attention would there have been if a certain series of films hadn’t been recently made? Consider also the increased attention paid to climate change in the United States after Hurricane Katrina: an event that is essentially impossible to attribute definitively to changes in the composition of the atmosphere and attendant climatic shifts.

For all the kerfuffle, there is obviously nothing about the solar system itself that has changed. Why, then, do people care so much? Partly, I suspect it has to do with simple familiarity. Just as it famously discomfited Einstein to be presented with the possibility that the universe is governed by chance at small scales, the idea that the millions of wall charts in science classrooms everywhere depicting a nine-planet solar system are, in some sense, ‘wrong’ may upset others. The solar system, as portrayed in everything from Scientific American to the Magic School Bus series, was a familiar model. That is not, in and of itself, a reason for preserving it. At the same time, I fail to see why this change is being granted such attention.

One other explanation that comes to mind has to do with the way in which many people relate to science: as a set of particular facts in which they have been educated and which they are expected to remember. All the discussion of having to change the mnemonic devices by which the names and sequence of the planets are remembered relates to that. Such a stripped-down conception of science doesn’t leave people with much scope for critical inquiry – though such an activity may not interest everyone. That is troublesome, I suppose, in an age when it is increasingly vital to have a grasp of scientific ideas and developments in order to be an effective participant in a democratic society. The category into which we file one particular lump of rock, orbiting the sun every 248 years or so, doesn’t have such importance.

The way people have been anthropomorphizing the issue strikes me as really odd. People stepping up to ‘defend’ Pluto from cruel astronomers who are ‘demoting’ it suggests that there is some emotional motivation behind the classification. Of course, there is no reason why it is ‘better’ or ‘worse’ for a lump of rock to be one thing or another, in and of itself. It may change our behaviour towards the object in question – think of discussions about whether humans are ‘animals’ or not – but it is quite nonsensical to think of the lump itself having a preference. Owen Gingerich, the head of the committee that came up with the new definition, had a much more comprehensible comment: “We are an expensive science, and if we don’t have public support, we are not going to be able to do our work.” Ah, the politics of science.

Something New Under the Sun

Flowers in a window, London

Happy Birthday Zandara Kennedy

Extensively footnoted and balanced in its claims, John McNeill’s Something New Under the Sun is an engaging and worthwhile study of the environmental history of the twentieth century. It covers atmospheric, hydrospheric, and biospheric concerns – focusing on those human actions and technologies that have had the greatest impact on the world, particularly in terms of those parts of the world human beings rely upon. People concerned with the dynamic that exists between human beings and the natural world would do well to read this volume. As McNeill demonstrates with ample figures and examples, that impact has been dramatic, though not confined to the twentieth century. What has changed most is the rate of change, in almost all environmentally relevant areas.

The drama of some documented changes is incredible. McNeill describes the accidental near-elimination of the American chestnut, the phenomenal global success of rabbits, and the intentional elimination of 99.8% of the world’s blue whales in clear and well-attributed sections. From global atmospheric lead concentrations to the depletion of the Ogallala Aquifer, he also covers a number of huge changes that are not directly biological. I found his discussion of the human modification of the planet’s hydrological systems to be the most interesting, quite probably because it was the least familiar thing he discussed.

Also interesting to note is that, published in 2000, this book utterly dismisses nuclear power as a failed technology. In less than three pages, it is cast aside as economically nonsensical (forever dependent on subsidies), inherently hazardous, and without compensating merit. It is interesting how quickly things can change. The book looks far more to the past than to the future, making fewer bold predictions about the future consequences of human activity than many volumes of this sort do.

Maybe the greatest lesson of this book is that the old dichotomy between the ‘human’ and the ‘natural’ world is increasingly nonsensical. The construction of the Aswan High Dam has fundamentally altered the chemistry of the Mediterranean at the same time as new crops have altered insect population dynamics worldwide and human health initiatives have changed the biological tableau for bacteria and viruses. To see the human world as riding on top of the natural world, and able to extract some set ‘sustainable’ amount from it, may therefore be unjustified. One world, indeed.

Perseid shower peaks tonight

Lost Lagoon, Vancouver

Taken during a walk with Astrid in late April 2005, this photo shows Lost Lagoon in Vancouver’s Stanley Park. Nearby, to the southeast, is Vancouver’s central urban district. Equally close, to the north and through the park, is the southern end of the Lions Gate Bridge to North Vancouver.

In an announcement particularly relevant to those who live outside of big cities, the Perseid meteor shower will reach its peak of intensity tonight. Generated from dust and fragments from comet Swift-Tuttle, the Perseid shower occurs annually. The comet in question was discovered in 1862 and is notable for being the largest object that regularly approaches the earth.

The best time to see the shower is in the hours immediately before dawn, but there should be more than eighty meteors per hour visible to the naked eye for most of the night, for those in reasonably dark places. Because of the way in which the planet rotates, the rate at which the meteors appear is about twice as high right before dawn as it is shortly after sunset. This is because, at that time, the particular part of the planet’s surface where you are is both hidden from the sun and facing in the direction of the planet’s motion around the sun. Because of that combination, the most visible collisions with material from the comet occur then.

The shower is called the Perseids because the meteors appear to be coming from the constellation Perseus. Those who are going out to watch may find it worthwhile to familiarize themselves with how the constellation looks and where in the sky it appears.

If anyone has a particularly dramatic experience, I would be glad to hear about it here. I continue to look up with dismay at the thick rain clouds over Oxford.

[Update: 13 August 2006] On account of the constant presence of rain clouds blocking the sky and reflecting back city light, I saw not a single meteor. I hope others did better.

A $500 bet

Let it be noted that the following bet has been placed, for a value of 500 Canadian dollars, at their present value:

I say that in August of 2036, the per-watt price of electricity consumed by the average Canadian consumer will be lower in real terms (accounting for inflation) than it is today. My friend Tristan Laing thinks the cost will be the same or higher. The price in question will be that quoted on the average Canadian’s electricity bill.

He has posted the same declaration on his blog.

[Update: 12 August 2006] I agree with a commenter that the cost per kilowatt-hour will be the easiest metric according to which this wager can be settled. To give a very approximate contemporary value, the cost to consumers for each kilowatt-hour of electricity used in Ontario today is about 5.8 cents. I will come up with a Canadian average soon.
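
For when the time comes, here is a minimal sketch (in Python) of how the comparison in real terms could be made: deflate the 2036 nominal price back into 2006 cents using the ratio of consumer price indices, then compare it with today’s figure. The 5.8 cent value is the one quoted above; the 2036 price and both index values are placeholders, not predictions.

# Sketch of settling the wager in real terms. Only the 5.8 cent figure comes
# from the post above; the other numbers are placeholders for illustration.

PRICE_2006_CENTS_PER_KWH = 5.8        # approximate Ontario consumer price, 2006

def real_price_in_2006_cents(nominal_2036_cents, cpi_2006, cpi_2036):
    """Deflate a 2036 nominal price into 2006 cents using the CPI ratio."""
    return nominal_2036_cents * (cpi_2006 / cpi_2036)

if __name__ == "__main__":
    nominal_2036 = 14.0     # hypothetical cents per kWh on a 2036 bill
    cpi_2006 = 109.1        # placeholder consumer price index for 2006
    cpi_2036 = 220.0        # placeholder consumer price index for 2036

    real_2036 = real_price_in_2006_cents(nominal_2036, cpi_2006, cpi_2036)
    print(f"2036 price in 2006 cents: {real_2036:.2f}")
    print("I win" if real_2036 < PRICE_2006_CENTS_PER_KWH else "Tristan wins")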

Orbital booster idea

I had an idea several years ago that I think is worth writing up. It is for a system to lift any kind of cargo from a low orbit around a planet into a higher one, with no expenditure of fuel.

Design

The system consists of two carriers: one shaped like a cylinder with a hole bored through it and the other shaped like a cigar. The cigar must be able to pass straight through the hole in the cylinder. The two must have the same mass, after being loaded with whatever cargo is to be carried. This could be achieved by making the cylinder fairly thin, by making the cigar longer than the cylinder, or by making one denser than the other. Within the cavity of the cylinder is a series of electromagnets; likewise, under the skin of the cigar. Around the cylinder is an array of photovoltaic panels; likewise, on the skin of the cigar. Each unit contains a system for storing electrical energy.

In addition to these main systems, each unit would require celestial navigation capability: the ability to determine its position in space using the observation of the starfield around it, as modern nuclear warheads do. This would allow it to act independently of ground-based tracking or the use of navigation satellites. It would also require small thrusters with fuel to be used for minor orbital course corrections.

Function

The two objects start off in low circular or elliptical orbits, along the same trajectory but in opposite directions. Imagine the cylinder tracing a path due north from the equator to the north pole and onwards around the planet, while the cigar traces the same path in the opposite direction: heading southwards after it crosses the north pole. The two objects will thus intersect each time they complete a half-orbit.

As each vehicle circles the planet, it gathers electrical power from solar radiation using the attached photovoltaic panels. When the two orbits intersect, the electromagnets in the cigar and the cylinder are used to repel one another, increasing the velocity of each projectile, in opposite directions, by taking advantage of Newton’s third law of motion. Think of it being like a magnetically levitated train with a bit of track that gets pushed in the opposite direction, flies around the planet, and meets up with the train again. I warn you not to mock the diagram of the craft! Graphic design is not my area of expertise. Obviously, it is not to scale.
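
As a quick numerical illustration of the third-law exchange (a sketch only; the mass and impulse below are invented), the impulse delivered by the magnets during a pass changes each craft’s velocity by the same amount, in opposite directions, because their masses are equal:

# An impulse J shared between the two equal-mass craft changes each one's
# velocity by J / m, in opposite directions (conservation of momentum).

def boost_delta_v(impulse_ns, mass_kg):
    """Change in speed (m/s) of one craft from an impulse in newton-seconds."""
    return impulse_ns / mass_kg

if __name__ == "__main__":
    mass = 1000.0       # kg per loaded craft, identical by design (invented value)
    impulse = 5000.0    # newton-seconds delivered by the electromagnets in one pass

    dv = boost_delta_v(impulse, mass)
    print(f"Each craft gains about {dv:.1f} m/s along its own direction of travel")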

The orbits

Diagram of successive orbits - By Mark Cummins

The diagram above demonstrates the path that one of the craft would take (see the second update below for more explanation). The dotted circle indicates where the two craft will meet for the first time, following the initial impulse. At that point, you could either project up to a higher elliptical orbit or circularize the orbit. This process can be repeated over and over. Here is a version showing both craft, one in red and the other in brown. See also this diagram of the Hohmann transfer orbit, for the sake of comparison. The Hohmann transfer orbit is a method of raising a payload into a higher orbit using conventional thrusters.

The basic principle according to which these higher orbits are being achieved is akin to one craft being a bullet and the other being the gun. Because they have equal mass, the recoil would cause the same acceleration in the gun as in the bullet; their velocities would change by equal amounts, in opposite directions. Because they can pass through one another, the ‘gun’ can be fired over and over. Because the power to do so comes from the sun, this can theoretically take place an indefinite number of times, with a higher orbit generated after each firing.

Because each orbit is longer, the craft would intersect less and less frequently. This would be partially offset by the opportunity to collect more energy over the course of each orbit, for use during the boosting phase.

As such, orbit by orbit, the pair could climb farther and farther out of any gravity well in which it found itself: whether that of a planet, asteroid, or a star. Because the electromagnets could also be used in reverse, to slow the two projectiles equally, it could also ‘climb down’ into a lower orbit.

Applications

On planets like Earth, with thick atmospheres, such a system could only be used to lift payloads from low orbits achieved by other means to higher orbits. The benefit of that could be non-trivial, given that a low orbit sits at an altitude of roughly 700 km while a geostationary orbit, as used for communication and navigation satellites, is at 35,790 km. Raising any mass to such an altitude requires formidable energy, even though the pull of Earth’s gravity weakens with the inverse square of the distance from the planet’s centre.
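
To put a rough number on ‘formidable energy’ (a back-of-the-envelope sketch, treating both orbits as circular and ignoring drag, inclination, and everything else), the specific orbital energy of a circular orbit is -mu / (2r), so the energy needed per kilogram is just the difference between the two orbits:

# Energy per kilogram needed to move from a circular 700 km orbit to GEO,
# using the specific orbital energy of a circular orbit: E = -mu / (2 * r).

MU_EARTH = 3.986e14        # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6          # m, mean Earth radius

def circular_orbit_energy(radius_m):
    """Specific orbital energy (J/kg) of a circular orbit of the given radius."""
    return -MU_EARTH / (2.0 * radius_m)

if __name__ == "__main__":
    r_leo = R_EARTH + 700e3          # 700 km low orbit, as in the text
    r_geo = R_EARTH + 35_790e3       # geostationary altitude from the text

    delta_e = circular_orbit_energy(r_geo) - circular_orbit_energy(r_leo)
    print(f"Energy needed: about {delta_e / 1e6:.1f} MJ per kilogram")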

A system of such carriers could be used to shift materials from low to high orbit. The application here is especially exciting in airless or relatively airless environments. Ores mined from somewhere like the moon or an asteroid could be elevated in this way from a low starting point; with no atmosphere to get in the way, an orbit could be maintained at quite a low altitude above the surface.

Given a very long time period, such a device could even climb up through the gravity well that surrounds a star.

Problems

The first problem is one of accuracy. Making sure the two components would intersect with each orbit could be challenging. The magnets would have to be quite precisely aligned, and any small errors would need to be corrected so the craft would continue to intersect properly. Because of sheer momentum, it would be an easier task with more massive craft. More massive vehicles would also take longer to rise in the gravity well through successive orbits, but would still require no fuel to do so, beyond a minimal amount for correctional thrusters, which could be part of the payload.

Another problem could be that of time. I have done no calculations on how long it would take for such a device to climb from a low orbit to a high one. For raw ores, that might not matter very much. For satellite launches, it might matter rather more.
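
Here is a crude sketch of the sort of calculation that would answer the time question, under strong simplifying assumptions: every boost happens at the fixed low point of the orbit, each pass adds the same small velocity increment, and the craft meet roughly once per orbit. The 5 m/s per pass and the 700 km starting orbit are illustrative, not engineering estimates.

import math

MU_EARTH = 3.986e14          # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6            # m, mean Earth radius

def climb_time(r_start, r_target_apoapsis, dv_per_pass):
    """Crude estimate: boost at the fixed low point once per orbit until the
    apoapsis reaches the target radius. Returns (passes, total seconds)."""
    r_p = r_start                    # periapsis stays at the starting radius
    a = r_start                      # start on a circular orbit
    total_time = 0.0
    passes = 0
    while 2 * a - r_p < r_target_apoapsis:               # apoapsis = 2a - r_p
        v_p = math.sqrt(MU_EARTH * (2 / r_p - 1 / a))    # vis-viva at periapsis
        v_p += dv_per_pass                               # one magnetic boost
        a = 1 / (2 / r_p - v_p**2 / MU_EARTH)            # new semi-major axis
        total_time += 2 * math.pi * math.sqrt(a**3 / MU_EARTH)   # one more orbit
        passes += 1
    return passes, total_time

if __name__ == "__main__":
    passes, seconds = climb_time(R_EARTH + 700e3, R_EARTH + 35_790e3, dv_per_pass=5.0)
    print(f"{passes} passes, about {seconds / 86400:.0f} days")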

Can anyone see other problems?

[Update: 7:26pm] Based on my extremely limited knowledge of astrophysics, it seems possible the successive orbits might look like this. Is that correct? My friend Mark theorizes that it would look like this.

[Update: 11 August 2006] Many thanks to Mark Cummins for creating the orbital diagram I have added above. We are pretty confident that this one is correct. He describes it thus: “your first impulse sends you from the first circle into an elliptical orbit. When your two modules next meet, (half way round the ellipse), you can circularize your orbit and insert into the dotted circle, or you can keep “climbing”, an insert into a larger ellipse. Repeat ad infinitum until you are at the desired altitude, then circularize.”

Power conservation through geothermal temperature regulation

For those concerned about climate change or dependency on foreign energy, a home geothermal heating and cooling system may be just the ticket. Such systems take advantage of the fact that the temperature a few metres underground remains relatively constant, whether the surface is overly hot or overly cold. As such, the ground can be used as a heat source in the winter and a heat sink in the summer, with only a minimal amount of energy needed to run the heat exchange. While this is a pretty expensive thing to install in a single existing house after the fact, it seems plausible that it could be scaled in ways that make it economically viable in a good number of environments.
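
As a rough illustration of the scale of the savings (the seasonal heating load and coefficient of performance below are assumptions, not measurements), a ground-source heat pump delivers several units of heat per unit of electricity, whereas resistance heating delivers one for one:

# Rough comparison of ground-source heat pump vs. electric resistance heating.
# The heating load and COP below are illustrative assumptions, not measurements.

def electricity_needed_kwh(heat_demand_kwh, cop):
    """Electrical energy needed to deliver the given heat, for a device with
    the given coefficient of performance (COP = heat out / electricity in)."""
    return heat_demand_kwh / cop

if __name__ == "__main__":
    winter_heat_demand = 15_000      # kWh of heat for a season (assumed)
    cop_ground_source = 3.5          # assumed COP in the typical ground-loop range
    cop_resistance = 1.0             # electric baseboard converts 1:1

    gshp = electricity_needed_kwh(winter_heat_demand, cop_ground_source)
    baseboard = electricity_needed_kwh(winter_heat_demand, cop_resistance)
    print(f"Heat pump: {gshp:,.0f} kWh vs. baseboard: {baseboard:,.0f} kWh")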

If electricity, oil, and gas really started to get expensive, you would start seeing a lot more such systems. Another example is the pipeline that draws cold water from the depths of Lake Ontario to cool office towers in Toronto during the summer.

Conservation may not be as technologically engrossing as genetically modified biofuels and hydrogen fuel cells, but it is definitely a proven approach.

Climate change and nuclear power

Locks on a gate

Among environmentalists these days, the mark that you are a hard-headed realist committed to stopping climate change is that you have come to support nuclear power. (See Patrick Moore, a co-founder of Greenpeace, in the Washington Post.) While appealing in principle, the argument goes, renewable sources of energy just can’t generate the oomph we need as an advanced industrial society – at least, not quickly enough to get us out of the hole we’ve been digging ourselves into through fossil fuel dependence.

I am sympathetic to the argument. A good case can be made for employing considerable caution when dealing with something as essential and imperfectly understood as the Earth’s climatic system. Nuclear power is strategically appealing – it could reduce the levels of geopolitical influence of some really nasty governments like Iran, Saudi Arabia, and Russia. It is appealing insofar as carbon emissions are concerned, though it is not quite as zero-emission as some zealots claim, once you take into account things like fuel mining and refining, transport, and construction. It is appealing insofar as it can generate really huge amounts of power, provided we can find people who are willing to have reactors in their vicinities.

The big problem, obviously, is nuclear waste. Nuclear reactors produce high-level radioactive waste, as well as becoming radioactive themselves over the course of time. The scales across which such waste is dangerous dwarf recorded human history. Wastes like plutonium-239 will remain extremely dangerous for tens of millennia. As The Economist effectively explains it:

In Britain only a few ancient henges and barrows have endured for anything like the amount of time that a nuclear waste dump will be expected to last—Stonehenge, the most famous, is “only” 4,300 years old. How best, for example, to convey the concept of dangerous radiation to people who may be exploring the site ten thousand years from now? By that time English (or any other modern language) could be as dead as Parthian or Linear A, and the British government as dim a memory as the pharaohs are today.
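
To give a rough sense of those timescales, here is a minimal sketch of how slowly plutonium-239 decays, using its half-life of roughly 24,100 years (the time points are chosen to echo the Stonehenge comparison above):

# Fraction of plutonium-239 remaining after a given time, from its half-life.

HALF_LIFE_PU239_YEARS = 24_100      # approximate half-life of Pu-239

def fraction_remaining(years, half_life=HALF_LIFE_PU239_YEARS):
    """Fraction of the original isotope left after the given number of years."""
    return 0.5 ** (years / half_life)

if __name__ == "__main__":
    for years in (4_300, 10_000, 100_000, 240_000):
        print(f"After {years:>7,} years: {fraction_remaining(years):.1%} remains")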

In fairness, we have some reason to believe that future generations will be more capable of dealing with high-level radioactive waste than we are. There is likewise some reason to believe that we can bury the stuff such that it will never trouble us again. Much of it has, after all, been dumped in far less secure conditions. Chernobyl remains entombed in a block of degrading concrete, and substantial portions of the Soviet nuclear fleet have sunk or been scuttled with nuclear waste aboard. (See: One, two, three) Off the coast of the Kola Peninsula near Norway, 135 nuclear reactors from 71 decommissioned Soviet submarines were scuttled in the Barents Sea during the Cold War. In addition, the Soviet Union dumped nuclear waste at 10 sites in the Sea of Japan between 1966 and 1991.

In the end, I don’t find the argument for long-term geological storage to be adequate. We cannot make vessels that will endure the period across which these materials will be dangerous. As such, I do not think we can live up to our obligations towards members of future generations if we continue to generate such wastes – though that is unlikely to matter much to politicians facing US$100 a barrel oil. Pressed to do so, I am confident that a combination of reduction in the usage of energy and the development of renewable sources could deal with the twin problems of climate change and the depletion of oil resources. The short term cost might be a lot higher than that associated with nuclear energy, but it seems the more prudent course to take.

All that said, I very much encourage someone to argue the contrary position.

Tuna farming

The bitter joke among fisheries scientists is that the Japanese are engaged in a dual project of turning all available knowledge and energy to the farm-rearing of bluefin tuna while simultaneously expending all available effort to catch every wild example.

This month, they succeeded in one of those aims: Hidemi Kumai and his team at Kinki University managed to raise fry born in captivity to adult size and then have them breed successfully. Because of the complexity of their life cycle, it is a considerable achievement. (Source) These are valuable fish, with the record holder having sold for $180,000 in Tokyo. The three largest fishers of bluefin tuna are the United States, Canada, and Japan.

This is good news for those who enjoy bluefin tuna sashimi, though they should probably be hoping that the rearing process can be scaled up to commercial levels. According to the US National Academy of Sciences [1], present day stocks are only 20% of what existed in 1975. Some sources hold existing bluefin stocks to be just 3% of their 1960 level. Present stocks are only 12% of what the International Commission for the Conservation of Atlantic Tunas has designated as necessary to maintain the maximum sustainable yield for the resource. Within another fifty years, it is quite possible that wild bluefin tuna will no longer exist.

[1] National Academy of Sciences, National Research Council. An Assessment of Atlantic Bluefin Tuna. Washington, DC: National Academy Press, 1994.

On audio compression

In the last few days, I have been reading and thinking a lot about audio compression.

Lossy v. lossless compression

As most of you will know, there are two major types of compression: lossless and lossy. In the first case, we take a string of digital information and reduce the amount of space it takes to store without actually destroying any information at all. For example, we could take a string like:

1-2-1-7-3-5-5-5-5-5-5-5-5-5-5-5-5-5-2-2-2-3-4

And convert it into:

1-2-1-7-3-5(13)-2(3)-3-4

Depending on the character of the data and the kinds of rules we use to compress it, this will result in a greater or lesser amount of compression. The upshot is that we can always return the data to its original state. If the file in question is an executable (a computer program), this is obviously required. A file that closely resembles Doom, as a string of bits, will nonetheless probably not run like Doom (or at all).
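
Here is a toy version of that kind of compression: a run-length encoder over dash-separated strings, matching the notation of the example above. Real lossless formats (ZIP, FLAC, PNG) use far more sophisticated schemes, but the principle of a perfectly reversible transformation is the same.

# Toy run-length encoder and decoder for dash-separated strings like the example above.

def rle_encode(data: str) -> str:
    """Collapse runs of repeated symbols: '5-5-5' becomes '5(3)'."""
    symbols = data.split("-")
    out = []
    i = 0
    while i < len(symbols):
        run = 1
        while i + run < len(symbols) and symbols[i + run] == symbols[i]:
            run += 1
        out.append(symbols[i] if run == 1 else f"{symbols[i]}({run})")
        i += run
    return "-".join(out)

def rle_decode(data: str) -> str:
    """Exact inverse of rle_encode: '5(3)' expands back to '5-5-5'."""
    out = []
    for token in data.split("-"):
        if "(" in token:
            symbol, count = token[:-1].split("(")
            out.extend([symbol] * int(count))
        else:
            out.append(token)
    return "-".join(out)

if __name__ == "__main__":
    original = "1-2-1-7-3-5-5-5-5-5-5-5-5-5-5-5-5-5-2-2-2-3-4"
    packed = rle_encode(original)
    print(packed)                            # 1-2-1-7-3-5(13)-2(3)-3-4
    assert rle_decode(packed) == original    # lossless: the round trip is exact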

Lossless compression is great. It allows us, for instance, to go back to the original data and then manipulate it with as much freedom as we had to begin with. The cost associated with that flexibility is that losslessly compressed files are larger than those treated with lossy compression. For data that is exposed to human senses (especially photos, music, and video), it is generally worthwhile to employ ‘lossy’ compression. A compact disc stores somewhere in the realm of 700 MB of data. Uncompressed, that would take up an equivalent amount of space on an iPod or computer hard drive. There is almost certainly some level of lossy compression at which it would be impossible for a human being with good ears and the best audio equipment to tell whether they were hearing the compressed or uncompressed version. This is especially true when the data source is CDs, which have considerable limitations of their own when it comes to storing audio information.

Lossy compression, therefore, discards the bits of information that are least noticeable in order to save space. Two patches of sky that are almost-but-not-quite the same colour of blue in an uncompressed image file might become exactly the same colour of blue in a compressed image file. This happens to a greater and greater degree as the level of compression increases. Whether for images or music, there is some point beyond which it is basically impossible to distinguish the original uncompressed data from a high-quality compressed file. With music, it might be that a tenth of a second of near silence followed by a tenth of a second of the slightest noise becomes a twentieth of a second of near silence.

MP3 and AAC are both very common kinds of music compression. Each can be done at different bit rates, which determine how much data is used to represent a given length of audio. Higher bit rates contain more data (which one may or may not be able to hear), while lower bit rates contain less. The iTunes standard is 128 kbps AAC. I have seen experts do everything from utterly condemn this as far too low to claim that at this level the sound is ‘transparent’: meaning that it is impossible to tell that it was compressed.
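
For a sense of what is being traded away, here is a quick sketch comparing the size of a four-minute track as uncompressed CD audio (44,100 samples per second, 16 bits per sample, two channels) with the same track at 128 kbps:

# How much space a four-minute track takes uncompressed (CD audio) versus
# encoded at a 128 kbps bit rate. CD audio: 44,100 samples/s, 16 bits, stereo.

def track_size_mb(bit_rate_bps, seconds):
    """Size in megabytes of audio at the given bit rate and duration."""
    return bit_rate_bps * seconds / 8 / 1_000_000

if __name__ == "__main__":
    seconds = 4 * 60
    cd_bit_rate = 44_100 * 16 * 2            # about 1,411 kbps
    aac_bit_rate = 128_000                   # 128 kbps, the iTunes standard

    print(f"Uncompressed: {track_size_mb(cd_bit_rate, seconds):.1f} MB")
    print(f"128 kbps AAC: {track_size_mb(aac_bit_rate, seconds):.1f} MB")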

But what sort to use, exactly?

Websites advising on which form of compression to use generally take the form of: “I have made twenty five different versions of the same three songs. I then listened to each using my superior audio equipment and finely tuned ear and have decided that X is the best sort of compression. Anyone who thinks you should use something more compressed than X obviously doesn’t have my fine ability to discern detail. Anyone who wants you to use more than X is an audiophile snob who is more concerned about equipment than music.”

This is not a very useful kind of judgment. Most problematically, the subject/experimenter knows which track is which when listening to them. It has been well established that telling an audio expert that they are listening to a $50,000 audiophile-quality stereo will lead to a good review of the sound, even if they are really listening to a $2,000 system. (There are famous pranks where people have put a $100 portable CD player inside the case of absurdly expensive audio gear and passed the former off as the latter to experts.) The trouble is both that those being asked to make the judgement feel pressured to demonstrate their expertise, and that people genuinely do perceive things they expect to be superior as being so.

Notoriously, people who are given Coke and Pepsi to taste are more likely to express a preference for the latter if they do not know which is which, but for the former when they do. Their pre-existing expectations affect the way they taste the drinks.

What is really necessary is a double-blind study. We would make a large number of versions of a collection of tracks, encoded at different quality settings. The files would then be assigned randomized names by a group that will not communicate with either the experimenters or the subjects. The subjects will then listen to two different versions of the same track and choose which they prefer. Each of these trials would produce what statisticians call a dyad. Once we have hundreds of dyads, collected in differing orders, we can start to draw statistically valid conclusions about whether two versions can be distinguished, and which one is perceived as better.

We would then analyze those frequencies to determine whether the difference between one version (say, 128 kbps AAC) and another (say, 192 kbps AAC) is statistically significant. I would posit that we will eventually find a point where people pick one or the other essentially at random, because they sound the same (640 kbps AAC v. 1024 kbps AAC, for instance). We therefore take the quality setting that is lowest but still distinguishable from the one below it at, say, a 95% confidence level, and use that to encode our music.
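
Here is a sketch of how the dyads could be analyzed, using only standard-library Python and an invented result: under the null hypothesis that the two encodings are indistinguishable, each dyad is a fair coin flip, and an exact two-sided binomial test tells us whether the observed preference rate clears the 95% bar.

# Exact two-sided binomial test for paired-preference (dyad) data.
# Null hypothesis: listeners cannot tell the encodings apart (p = 0.5).

from math import comb

def binomial_two_sided_p(successes, trials, p_null=0.5):
    """Probability of an outcome at least as unlikely as the observed one,
    assuming each preference is a fair coin flip."""
    p_observed = comb(trials, successes) * p_null**successes * (1 - p_null)**(trials - successes)
    total = 0.0
    for k in range(trials + 1):
        p_k = comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
        if p_k <= p_observed + 1e-12:
            total += p_k
    return total

if __name__ == "__main__":
    # Invented example: out of 300 dyads, the higher bit rate was preferred 175 times.
    p_value = binomial_two_sided_p(175, 300)
    verdict = "distinguishable" if p_value < 0.05 else "indistinguishable"
    print(f"p = {p_value:.4f}: treat the two encodings as {verdict} at the 95% level")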

This methodology isn’t perfect, but it would be dramatically more rigorous than the expert-driven approach described above.