Digital photo frames have stupid aspect ratios

The aspect ratio of an image or photograph is the ratio of the length of one side to the length of the other. For instance, 35mm film, 4×6″ prints, and full-frame digital sensors all have proportions of 3:2. Most APS-C sensors, used in cheaper DSLRs, are also around 3:2. Images from my Rebel XS are 3888 × 2592 pixels, which is a 3:2 aspect ratio.

Standard definition televisions and many point-and-shoot digital cameras use an aspect ratio that is closer to square: 4:3. The same ratio is used by Four Thirds system cameras and 645 medium format cameras. For instance, my old Canon A570 IS produces images that are 3072 × 2304 pixels, which is a 4:3 aspect ratio.

What vexes and perplexes me is the fondness digital picture frame manufacturers have for making wide-screen devices. They have ratios like 16:9 and 15:9, which means that images from virtually any commonly used film or digital camera will appear with relatively thick bands of black screen space on either side. This is akin to watching a VHS tape or a standard television broadcast on a wide-screen high-definition television. Given how much digiframe manufacturers charge for screen space (a good 10″ frame costs around $300, whereas 19″ LCD monitors can be had for around $150), it seems foolish for them to throw away so much of it. Why spend $300 on Sony’s DPF-V1000 frame knowing that a good fraction of the screen space will be wasted with every photo you ever display?

A frame with a 3:2 aspect ratio would show images from film and higher grade digicams perfectly, and images from cheaper digicams with only minor bars. It therefore bewilders me that this is not the standard for digital photo frames. It might have something to do with being able to brand them ‘high definition.’ Of course, you can have a 3:2 aspect ratio frame with any level of definition you want: it could be three billion by two billion pixels!
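
To put rough numbers on the wasted space, here is a minimal sketch of how much of a screen stays black when a photo is fitted onto it without cropping, using the aspect ratios discussed above (the helper function is just for illustration):

```python
# Fraction of a display actually used when an image is fitted inside it
# without cropping (pillarboxed or letterboxed).
def fraction_used(image_ratio, screen_ratio):
    """Both arguments are width/height ratios, e.g. 3/2 or 16/9."""
    if image_ratio <= screen_ratio:
        # Image is 'squarer' than the screen: black bars on the sides.
        return image_ratio / screen_ratio
    # Image is wider than the screen: black bars on top and bottom.
    return screen_ratio / image_ratio

for screen_name, screen in [("16:9 frame", 16 / 9), ("3:2 frame", 3 / 2)]:
    for image_name, image in [("3:2 photo", 3 / 2), ("4:3 photo", 4 / 3)]:
        used = fraction_used(image, screen)
        print(f"{image_name} on a {screen_name}: {used:.0%} of the screen used")

# 3:2 photos use about 84% of a 16:9 frame and 4:3 photos only 75%,
# versus 100% and 89% respectively on a 3:2 frame.
```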

Aside on ‘megapixels’

It is also worth noting that the number of pixels along the long edge of an image gives a better idea of comparative resolution than the megapixel count does. Doubling the number of pixels along each edge quadruples the megapixel count, even though each pixel only becomes half as wide, so megapixel figures exaggerate differences in linear resolution.

Looking at the pixels, it is easy to see that the A570 has 79% of the resolution of the Rebel XS. By contrast, reading that the Rebel has a 10.1 megapixel sensor and the A570 has a 7.1 megapixel sensor might lead a customer to be mistaken about how much more image quality they are getting. The difference gets even more significant with higher-end cameras. A consumer might naively think that a 21.1 megapixel 5D Mark II has three times the resolution of my cheap A570 IS. In fact, it produces photos that are 5616 × 3744 pixels; along the long edge, the A570 puts out 55% as many.
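
The same comparison in a few lines of arithmetic, using the pixel dimensions quoted above (a quick sketch, not anything official from Canon):

```python
# Long-edge pixel count is a better guide to comparative resolution than
# the megapixel figure, which grows with the square of linear resolution.
cameras = {
    "Canon A570 IS": (3072, 2304),
    "Canon Rebel XS": (3888, 2592),
    "Canon 5D Mark II": (5616, 3744),
}

def megapixels(dimensions):
    width, height = dimensions
    return width * height / 1e6

reference = cameras["Canon A570 IS"]
for name, dims in cameras.items():
    linear = dims[0] / reference[0]                     # long-edge ratio
    by_megapixels = megapixels(dims) / megapixels(reference)
    print(f"{name}: {linear:.2f}x the linear resolution, "
          f"{by_megapixels:.2f}x the megapixels")

# The 5D Mark II has roughly 3x the megapixels of the A570 IS,
# but only about 1.8x the linear resolution.
```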

Admittedly, there are many properties of a sensor that are at least as important as resolution, such as noise level at high ISO settings. That is why I argue that, above 6 megapixels or so, resolution ceases to be an important issue in comparing cameras. Factors like noise and dynamic range are much more important.

Valve games for Mac

One significant downside of being a Mac user is gaming. The saddest part of the Apple store is definitely the thin shelf of largely old, largely mediocre, heavily kid-focused games. As such, it is a welcome development that Valve is bringing the Portal and Half-Life series to Macs.

As good as Halo and Warcraft III are, it will be nice to have some more variety. This may also be a signal that increasing market penetration is leading to game companies getting more serious about Apple.

(Oh, and I am aware that I could install a Microsoft OS on a partition. I just don’t think it is worth the expense and bother.)

The Lindzen Fallacy

The Lindzen Fallacy is a sub-genre of the fallacy of petitio principii (begging the question) that I have named after MIT meteorology professor and climate change delayer Richard Lindzen. I define it as follows:

The assumption that fears about catastrophic or runaway climate change are overblown, based on the assumption that climate change can never truly imperil humanity.

Many people have a deep, intuitive sense that the world will remain as it is: in particular, that it will continue to provide the basic physical requirements of humanity, such as breathable air, acceptable temperatures, and conditions suitable for continued agriculture.

This perspective is clearly a bit of circular logic: climate change cannot be dangerous, because the climate can never truly imperil humanity; and the climate can never truly imperil humanity, because climate change cannot be dangerous. (Repeat as often as you like.)

Negative feedbacks

Lindzen has told the US Coast Guard Academy that: “Extreme weather events are always present. There’s no evidence it’s getting better, or worse, or changing.” He has suggested that there simply must be negative feedbacks that counter the warming effects of greenhouse gases, possibly through increased radiation of heat into space, caused by columns of tropical cumulus convection carrying large amounts of heat high into the atmosphere. Satellite data from NASA’s Clouds and the Earth’s Radiant Energy System (CERES) mission raise serious doubts about this being a negative climate feedback, and his view of climate sensitivity appears dubious in relation both to climate models and to the paleoclimatic record. Lindzen also argued to the Vice President’s Climate Task Force, under the Bush Administration, that action should not be taken to mitigate climate change.

Climatologist James Hansen speculates that: “Lindzen’s perspective on climate sensitivity… stems from an idea of a theological or philosophical perspective that he doggedly adheres to. Lindzen is convinced that nature will find ways to cool itself, that negative feedbacks will diminish the effect of climate forcings.” Back in 1999, Hansen responded to Lindzen’s hypotheses about negative feedbacks by encouraging the scientific community to investigate two things: a) whether water vapour feedbacks can be observed, and b) whether ocean heat content is increasing in line with model predictions. In the view of climatologist Gavin Schmidt, subsequent evidence has supported the Hansen view and drawn the Lindzen perspective into question.

Just showing that negative feedbacks exist is not enough to prove that climate change is dangerous, or that we should do nothing about it. As I argued in a discussion with a different climate denier:

What specific mechanism counteracts the infrared absorbing effect of greenhouse gases? If such an effect exists, why has it automatically been getting stronger as concentrations rise? Also, what proof is there that, even if there were such an effect, it would protect us from any amount of increased GHG concentrations? For instance, continued business-as-usual emissions could push concentrations to over 1000 ppm of CO2 equivalent by 2100, compared to 280 ppm before the Industrial Revolution and about 383 ppm now. Even if there were negative feedback effects that significantly reduced the total forcing resulting from increased GHG concentrations (that is, lowered climatic sensitivity), it is possible that they would break down when presented with such a significant change.
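
For a sense of the scale involved, here is a minimal sketch using the commonly cited simplified formula for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m²; the formula, and treating the 1000 ppm CO2-equivalent figure as though it were CO2 alone, are simplifying assumptions on my part:

```python
import math

# Simplified expression for CO2 radiative forcing relative to a baseline
# concentration: delta_F ~= 5.35 * ln(C / C0), in watts per square metre.
# Treating the 1000 ppm CO2-equivalent figure as if it were CO2 is a
# rough simplification.
PRE_INDUSTRIAL_PPM = 280

def forcing_w_per_m2(concentration_ppm, baseline_ppm=PRE_INDUSTRIAL_PPM):
    return 5.35 * math.log(concentration_ppm / baseline_ppm)

print(f"Today (~383 ppm): {forcing_w_per_m2(383):.1f} W/m^2")
print(f"Business-as-usual 2100 (~1000 ppm CO2e): {forcing_w_per_m2(1000):.1f} W/m^2")

# Roughly 1.7 W/m^2 today versus about 6.8 W/m^2 at 1000 ppm: a very
# large additional forcing for any negative feedback to cancel out.
```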

It is not enough to show that there are one or more negative feedbacks in the climate system. It is necessary to show that they will be sufficient in magnitude and durability to counter the warming caused by anthropogenic greenhouse gases. The fact that both concentrations and temperatures are still rising suggests that this is not the case in today’s climate, and the existence of massive potential positive feedbacks (Arctic sea ice albedo, permafrost methane, etc.) makes it dubious for future climates.

Further to that, the point I am raising here is not about the technical means by which Lindzen or anyone else thinks the climate will automatically rebalance in response to changes caused by humanity. Rather, it is to highlight the faulty assumption that such rebalancing can be taken for granted, regardless of the specific means by which it might occur.

The Lindzen Fallacy is dangerous because it offers us false comfort. If mainstream climate science is correct, and a business-as-usual course will produce far more than 2°C of warming by the end of the century, future generations will think back with regret about all those in our time (and before) who falsely believed that the world could never become inhospitable to humans.

A related bit of faulty thinking

The Lindzen fallacy relates to another flawed and potentially dangerous perspective: namely, that humanity is so adaptable that, no matter how much the climate changes, we will be able to adapt. While it is hard to see how humanity could survive runaway climate change, it is easy to see why someone would think the empirical evidence supports this view. After all, nothing has wiped us out yet. Unfortunately, this logic suffers from the same fault as that of a chicken famously described by Bertrand Russell in The Problems of Philosophy:

And this kind of association is not confined to men; in animals also it is very strong. A horse which has been often driven along a certain road resists the attempt to drive him in a different direction. Domestic animals expect food when they see the person who feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

In short, inductive reasoning is dangerous whenever there is a chance of something truly unprecedented taking place.

There are good scientific reasons to believe that climate change could be just such a dangerous, unprecedented phenomenon in relation to human beings.

How not to use feed-in tariffs

As I mentioned when expressing doubt about Bloom Boxes, many environmentalists assume that distributed generation of electricity is inherently preferable to large-scale generation and transmission. As I have argued in the past, there are good reasons to argue the converse. Micro wind turbines are especially dubious, given that a turbine’s output scales with the swept area of its blades, and thus with the square of their diameter (see the sketch below). Those little rooftop turbines some people install just don’t make sense, unless they live in very remote and windy areas. In a place as northern and cloudy as Britain, home solar photovoltaic arrays may make even less sense, especially if investments in more cost-effective options, like improving the efficiency of energy use, have not yet been made. Saving many kilowatt-hours a day through better insulation beats producing a trickle of electricity, especially given that the insulation is less costly.
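
A back-of-the-envelope sketch of that scaling, using the standard wind power relation P = ½ρAv³Cp; the rotor diameters, wind speeds, and power coefficient below are illustrative assumptions, not figures for any particular product:

```python
import math

# Power available to a wind turbine: P = 0.5 * rho * A * v^3 * Cp, where
# A is the swept area, so output scales with the square of rotor diameter
# and the cube of wind speed.
AIR_DENSITY = 1.225   # kg/m^3 at sea level
CP = 0.35             # plausible overall power coefficient, below the Betz limit

def turbine_power_watts(rotor_diameter_m, wind_speed_m_s):
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_m_s ** 3 * CP

micro = turbine_power_watts(1.5, 4.0)     # rooftop turbine in an urban breeze
utility = turbine_power_watts(80.0, 9.0)  # utility-scale rotor at a windy site

print(f"Micro turbine:   about {micro:.0f} W")
print(f"Utility turbine: about {utility / 1e6:.2f} MW")
```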

In a recent essay, George Monbiot argues that feed-in tariffs for small scale renewables are regressive and a waste of money:

[The government] expects this scheme to save 7m tonnes of carbon dioxide by 2020. Assuming, generously, that the rate of installation keeps accelerating, this suggests a saving of around 20m tonnes of CO2 by 2030. The estimated price by then is £8.6bn. This means it’ll cost around £430 to save one tonne of carbon dioxide.

Indeed, if the government is going to provide feed-in tariffs for renewable projects, they must be for the sort that can actually make a difference: multi-megawatt run-of-river hydro projects, concentrating solar stations that can put out baseload power, and the like. If the government wants a sound climate policy for homes, it should be tightening building standards and encouraging retrofits.

The real story on glaciers

There has been a huge amount of talk about the claim in the IPCC’s most recent report that Himalayan glaciers would be gone by 2035. That figure is wrong, and came from a dubious source. That said, the state of the world’s glaciers is not encouraging. Germans are putting a reflective cover on their last glacier, to slow down its melting. The global glacier index shows a clear trend of decline, whether you look at data on all glaciers, on the 30 reference glaciers of special importance, or on a subset of North American glaciers. Not only is the decline clear, but it is clearly accelerating.

Perhaps the biggest news is from Greenland, as described in Alun Anderson’s excellent After the Ice:

If you take into account the rapid collapse of the glaciers, how much water is Greenland adding to the world’s oceans? In 2008, [Caltech glaciologist Eric] Rignot teamed up with scientists from around the world and estimated that the ice sheet had been losing 30 gigatons of ice a year from the 1970s through the 1980s, 97 gigatons in 1996, and between 239 and 305 gigatons in 2007… A gigaton is a billion metric tons, or the weight of a cubic kilometer of water. Add the latest annual figure of 305 gigatons to the oceans and the sea level rises by close to a millimeter. Keep going faster for a century on top of the natural thermal expansion of the oceans as they warm and ice melting elsewhere and that is enough for governments around the world to have to add billions to the cost of coastal defences. The acceleration is deeply worrying. Its cause appears to be those rapidly moving glaciers: the paper shows that they account for between 40 and 80 percent of the ice loss.

I called up Eric Rignot in his laboratory and asked if he was surprised too. He laughed. “Even just a couple of years ago, to state that the ice sheet was losing as much mass as it is, would make me considered a wild man. I think if you had told people in 1990 that I would make a prediction in 2008 that we were going to lose three hundred gigatons per year of ice in Greenland, everybody would have laughed. He is not serious, they would have said. There is no way you can get anything like that.” So what will happen next? “We see acceleration. It’s not a linear trend; it’s more rapid than that. I don’t know where it’s going to go. Ten years ago we thought we knew everything. Now we know we don’t.” (p.233 hardcover)
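
The arithmetic in that passage is easy to check. A minimal sketch, taking the global ocean surface area to be roughly 3.6 × 10¹⁴ m² (a standard round figure, and my assumption here):

```python
# Sea level rise implied by adding a given mass of melted land ice to the oceans.
OCEAN_AREA_M2 = 3.6e14   # rough global ocean surface area (assumed round figure)
WATER_DENSITY = 1000.0   # kg per cubic metre

def sea_level_rise_mm(gigatons):
    volume_m3 = gigatons * 1e12 / WATER_DENSITY   # 1 gigaton = 1e12 kg
    return volume_m3 / OCEAN_AREA_M2 * 1000       # metres -> millimetres

print(f"{sea_level_rise_mm(305):.2f} mm")  # about 0.85 mm, 'close to a millimeter'
```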

Once this ice is lost, it won’t be coming back. When bright shiny snow gets replaced with dark ground, the Arctic absorbs even more energy from the sun. Furthermore, the shrubs that replace tundra (and the forests that replace them) are progressively more absorptive of sunlight. Partly, this is because the new vegetation extends above the snow.

It is really hard to see how anybody looking at the data can conclude that glaciers provide support for the contention that climate change is not happening, or not likely to be a problem for human beings.

Caltech’s new solar cells

In an announcement that is exciting, if accurate, researchers from Caltech claim to have developed cells that can capture 85 percent of total collectible sunlight using only a fraction of the silicon required for conventional cells. The cells are 98% transparent plastic, with just 2% of their volume consisting of silicon.

For now, they have only built tiny ones. If the approach can be effectively scaled up, it could help cut the capital cost for solar photovoltaic facilities.

Bill Gates on nuclear power

Bill Gates has brushed up against climate issues before. First, he apparently considered investing in the oil sands. Later, he invested $4.5 million of his own money in geoengineering research.

Most recently, he gave a talk at the TED conference advocating that developed countries and China cut greenhouse gas emissions to zero by 2050 (producing an 80% overall reduction), and do so largely on the basis of nuclear power. He thinks fast breeder reactors capable of using U-238 are the way forward, given how much more fuel would be available. His favoured breeder design is the traveling wave reactor, which is theoretically capable of running on little or no enriched uranium.

Emissions equation

Gates argues that the key equation is: CO2 emissions = (population) × (services per person) × (energy per service) × (greenhouse gas intensity of energy). To get emissions down to zero, one of these factors needs to be driven to zero. He argues that more services are important, especially for the world’s poor. Efficiency, he argues, can be improved quite substantially, perhaps three to sixfold overall. The real work, he argues, needs to be done by cutting the GHG emissions associated with energy production to near zero.
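
A minimal sketch of that identity, with purely illustrative numbers rather than Gates’ own figures:

```python
# Gates' identity: CO2 = P x S x E x C, where P is population, S is services
# per person, E is energy per service, and C is CO2 per unit of energy.
def total_emissions(population, services_per_person,
                    energy_per_service, co2_per_unit_energy):
    return (population * services_per_person *
            energy_per_service * co2_per_unit_energy)

# Illustrative numbers only: units chosen so the baseline lands near the
# ~26 Gt figure mentioned below, not Gates' actual data.
baseline = total_emissions(6.8e9, 1.0, 1.0, 3.8)
more_services_more_efficiency = total_emissions(9e9, 3.0, 1.0 / 6.0, 3.8)
zero_carbon_energy = total_emissions(9e9, 3.0, 1.0 / 6.0, 0.0)

for label, value in [("baseline", baseline),
                     ("more services, 6x efficiency", more_services_more_efficiency),
                     ("same, but zero-carbon energy", zero_carbon_energy)]:
    print(f"{label}: {value / 1e9:.1f} Gt")

# Unless the carbon intensity term itself reaches (near) zero, growth in
# population and services keeps total emissions well above zero.
```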

Energy options

Gates argues that the energy systems of the future will need massive scale and high reliability. He singles out five he sees as especially promising, though with significant challenges:

  • Carbon capture and storage (CCS) – hampered by cost, access to suitable sites for injection, and long-term stability of stored gases (the toughest part)
  • Nuclear – with its cost, safety, proliferation, and waste issues
  • Wind
  • Solar photovoltaic
  • Solar thermal – all three limited by land use, cost, transmission requirements, and the need for energy storage to modulate fluctuations in output

Four others he describes as potentially able to make a contribution but decidedly secondary in importance:

  • Tide
  • Geothermal
  • Biomass
  • Fusion

I agree that fusion is a long shot that we cannot count on. I am more optimistic than Gates about the other three. Pumped tidal power could provide some of the energy storage he sees as so important. Enhanced geothermal looks like it has a lot of promise. Finally, combined with CCS, burning biomass offers us a mechanism to actually draw carbon dioxide out of the atmosphere and bury it.

The big picture

Cutting the world’s current global emissions of about 26 billion tonnes (gigatonnes) of CO2 per year down to zero will require an enormous effort. Quite possibly, nuclear will need to be part of that, despite its many flaws. That said, we need to be hedging all of our bets. One big accident could put people off nuclear, or fast breeder designs could continue to prove impractical. We need to be deploying options like huge concentrating solar farms in deserts and massive wind installations at the same time.

It is also worth noting that Gates’ assumptions about the rate at which emissions must be reduced are more lenient than those of people like James Hansen, who are more concerned about when massive positive feedbacks will be triggered. If the people who say we need to stabilize at 350 ppm are correct, Gates’ prescription of a 20% cut by 2020 and an 80% cut by 2050 will be inadequate to prevent catastrophic or runaway climate change.

Gates talks about this a bit during the questions. There are two risks: that his assumptions about the speed with which emissions must be cut are too lenient, or that his beliefs about the pace of technological development and deployment are overly optimistic. He thinks geoengineering could “buy us twenty or thirty years to get our act together.” Here’s hoping we never have to test whether that view is accurate.

Arctic sea ice volume

I expected Alun Anderson’s After the Ice: Life, Death, and Geopolitics in the New Arctic to mostly contain information I had seen elsewhere. In fact, it is chock full of novel and interesting details on everything from marine food webs to international law to oil field development plans. I read the first 200 pages in one sitting.

One chapter goes to some length in describing how we know what we do about Arctic sea ice volume. It is harder to measure than the extent of sea ice, which can be observed in all sorts of ways by satellites (optical instruments, synthetic aperture radar, passive microwave emissions, etc.). One effort to estimate how ice volume is changing was based on multibeam sonar aboard submarines. An 11-day survey conducted by Peter Wadhams, using the nuclear-powered submarine HMS Tireless, concluded that 40% of Arctic sea ice has been lost since the 1970s. Another team, led by Drew Rothrock, used previously secret US submarine data to confirm that figure for all areas that submarines have been visiting.

Anderson also describes the importance of the cold halocline layer: a thin layer of cold water that insulates the bottom of Arctic ice from the warmer Atlantic waters underneath. Without this layer, multiyear Arctic ice would be doomed. For a number of reasons, climate change threatens to undermine it. If it does, the complete disappearance of summer sea ice could occur faster than anyone now expects.

There are many reasons to worry about the vanishing Arctic ice, from the increased absorption of solar radiation that accompanies lost albedo to the danger of invasive species entering the Atlantic from the Pacific. I’ve written previously about ‘rotten’ ice, and many other issues in Arctic science.

Those much-hyped ‘Bloom Boxes’

Now that some figures are on their website, it is possible to comment a bit more meaningfully on Bloom Energy (beyond noting that they can attract a lot of heavyweights to their press events).

They seem to have deployed 3 megawatts of fuel cells across seven installations, twice as much capacity as Grouse Mountain’s solitary wind turbine provides. Two of those installations (with a combined output of 900 kW) are running on methane from renewable sources. According to Wikipedia, the fuel cells cost $7,000 to $8,000 per kilowatt. That is extremely high: an open-cycle gas turbine power plant costs about $398 per kilowatt, wind turbines something like $1,000 per kilowatt, nuclear probably over $2,000, and even solar photovoltaic comes in under $5,000. From an economic perspective, natural gas also isn’t the most appealing fuel for electricity production, since it has significantly higher price volatility than coal.
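
A quick back-of-the-envelope comparison of what 3 MW of capacity costs at the per-kilowatt prices quoted above (all of them rough figures):

```python
# Rough capital cost of 3 MW of capacity at the per-kilowatt figures above.
COST_PER_KW = {
    "Bloom fuel cells": 7500,        # midpoint of the $7,000-8,000 range
    "Open-cycle gas turbine": 398,
    "Wind": 1000,
    "Nuclear": 2000,
    "Solar photovoltaic": 5000,
}

CAPACITY_KW = 3000  # the roughly 3 MW Bloom appears to have deployed

for technology, cost_per_kw in COST_PER_KW.items():
    total_millions = cost_per_kw * CAPACITY_KW / 1e6
    print(f"{technology}: about ${total_millions:.1f} million")

# Roughly $22.5 million of fuel cells versus about $1.2 million for an
# equivalent open-cycle gas turbine.
```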

Without more statistics, it is impossible to know how the efficiency of these fuel cells compares to that of conventional natural gas power plants, either before or after transmission losses are factored in. Bloom’s literature says that, when running on conventional natural gas, emissions from their fuel cells are 60% lower than those from a coal power plant. Frankly, that isn’t terribly impressive: coal plants generate massive amounts of CO2 relative to their power output. It also isn’t clear whether methane from renewable sources would be more efficiently used in these distributed fuel cells than in larger facilities based around turbines and combustion.

Many environmentalists assume that distributed power is the future, but there are definitely advantages to large centralized facilities. They can take advantage of economies of scale and concentrated expertise. They may also find it easier to maintain the large temperature differential that sets the Carnot limit on efficiency.
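
For reference, here is a minimal sketch of the Carnot formula that paragraph alludes to, with purely illustrative temperatures of my own choosing:

```python
# Carnot limit on the efficiency of any heat engine:
# efficiency = 1 - T_cold / T_hot, with temperatures in kelvin.
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    return 1 - t_cold_kelvin / t_hot_kelvin

# Illustrative temperatures only: a plant with very hot turbine inlet
# temperatures versus a smaller unit running cooler, both rejecting heat
# at roughly ambient temperature.
print(f"{carnot_efficiency(1500 + 273, 25 + 273):.0%}")  # ~83% theoretical ceiling
print(f"{carnot_efficiency(600 + 273, 25 + 273):.0%}")   # ~66% theoretical ceiling
```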

It will be interesting to see how Bloom’s products stack up, when more comparative data is available.

Wave Hub

Despite moderate potential, wave power is one form of renewable energy that hasn’t really gotten off the ground yet. One project in Cornwall is helping to change that. Wave Hub will test four different kinds of equipment for converting wave energy into electricity, producing 20 megawatts of power in the process.

The equipment will be about ten miles offshore.

David MacKay estimates that the UK could deploy as much as 1,000 km of wave power generators, yielding four kilowatt-hours per day for each person in the UK. That’s small beans beside the 116 kilowatt-hours per day that each person in the UK actually uses, but we need to be looking into all available renewable options.
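
Putting those figures side by side, with a round assumption of my own for the UK population:

```python
# Average power implied by MacKay's 4 kWh/day/person wave estimate,
# compared with the Wave Hub demonstration project.
UK_POPULATION = 61e6              # rough 2010 figure (assumed)
WAVE_KWH_PER_PERSON_PER_DAY = 4
WAVE_HUB_MW = 20

average_gw = UK_POPULATION * WAVE_KWH_PER_PERSON_PER_DAY / 24 / 1e6
print(f"MacKay's wave estimate: about {average_gw:.1f} GW of average power")
print(f"Wave Hub: {WAVE_HUB_MW / 1000:.2f} GW of capacity")

# Wave Hub's 20 MW is a small first step towards the ~10 GW of average
# power that MacKay's estimate represents.
```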