The distance to the horizon

When searching for Napoleon’s fleet in the Mediterranean, Admiral Lord Nelson positioned his own ships in a line, spaced as widely as possible while still allowing them to see one another. As it turns out, there is a fairly basic formula for calculating the distance to the horizon from any particular vantage point. It assumes the planet you are on is perfectly spherical, but it should provide a decent approximation for slightly-squashed spheres such as the Earth. Obviously, the formula will be thrown off if you are on anything other than perfectly level ground. As such, it is better applied at sea than on land.

d = √(h^2 + 2rh)

where
d = the distance to the horizon
h = the observer’s height above the ground
r = the radius of the planet

Nelson’s flagship, HMS Victory, was 62.5m from waterline to the top of the mainmast. Someone sitting up there would have been able to see a bit more than 28km.
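For anyone who wants to check the arithmetic, here is a minimal sketch in Python, using the standard mean Earth radius of about 6371km (the function name is just for illustration):

```python
# A minimal sketch of the horizon formula d = sqrt(h^2 + 2rh),
# checked against the HMS Victory example above.
from math import sqrt

def horizon_distance(h, r=6371e3):
    """Straight-line distance to the horizon (in metres) for an observer
    at height h (metres) on a sphere of radius r (metres)."""
    return sqrt(h**2 + 2 * r * h)

print(horizon_distance(62.5) / 1000)  # roughly 28.2 km from the masthead
```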

This formula is useful if you are on a planet with a known radius and want to know how far away something on the horizon is (the mean radius of the Earth is about 6371km). It could also be useful if you are on a planet whose radius you want to determine. Just drive a stake into the ground and walk away from it until it begins to vanish – then, apply the formula above to solve for the radius. Of course, you will need either sharp eyes or some kind of telescope if the planet you are on has any substantial size.
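Rearranging the same formula gives the radius directly, assuming the distance you walk is close enough to the straight-line distance d (true whenever d is tiny compared to r). The numbers below are invented for illustration:

```python
# Solving d^2 = h^2 + 2rh for r gives r = (d^2 - h^2) / (2h).
def planet_radius(d, h):
    """Radius implied by a ground-level object starting to vanish at
    distance d (metres) for an observer whose eyes are at height h (metres)."""
    return (d**2 - h**2) / (2 * h)

print(planet_radius(28220, 62.5))  # about 6.37 million metres, i.e. Earth-sized
```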

Incidentally, while they were still operating, the Concordes flew at an altitude of about 17,000m, allowing people looking out the windows to get a clear view of the curvature of the Earth.

Some security-related reading

Shadow outline on buildings

Here are a few interesting and lengthy security-related documents which have recently become available. They are all in PDF format:

I will post something more original as soon as possible.

Studio experimentation I

The photography class I was hoping to take at the Ottawa School of Art was cancelled due to low enrollment. That probably has a fair bit to do with the ongoing transit strike.

Never mind. I can learn photographic lighting without the benefit of a class. I shot these on my dining room table. I used some tracing paper and my ironing board to set up a crude seamless backdrop (something more opaque would be better). For illumination, I used a hotshoe-mounted flash. For light modification, I used a big round reflector: white on one side, soft gold on the other.

Because I have neither a wireless cable release (what a bizarre anachronism that term is!) nor an assistant, my basic approach was to turn the dining room lights on, focus manually, turn the lights out, point and set the flash, push the shutter (on two second delay), then dash into the right spot while holding my reflector.

These are all the original files, straight from the camera with no Photoshop tomfoolery.

Colour temperature and photography

Eye exams on Somerset

One way in which colours are categorized is according to the temperatures at which materials emit them, when heated in a vacuum. The phenomenon of warm things emitting light can be observed readily: for instance, when a bar of iron glows red, then orange, then yellow, then white as it is heated. Some of the key colours photographically are those akin to the light of the sun around noon (about 5500K) and the light from incandescent bulbs (about 3300K). Just as with the heated iron bar, the hotter the light source, the ‘cooler’ the colour appears: ranging from reds and oranges at low temperatures up through whites and blues at high temperatures. This can be a bit confusing, since the colours artists describe as ‘warm’ are actually produced by low temperatures, and vice versa.
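To make the link between temperature and emitted light a bit more concrete, here is a rough sketch using Wien’s displacement law, which gives the wavelength at which a black body at a given temperature emits most strongly (my own illustration, not something from a photography manual):

```python
# Wien's displacement law: peak wavelength = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # metre-kelvins

def peak_wavelength_nm(temperature_k):
    return WIEN_B / temperature_k * 1e9  # result in nanometres

print(peak_wavelength_nm(5500))  # ~527nm: midday sunlight peaks in the green, middle of the visible range
print(peak_wavelength_nm(3300))  # ~878nm: an incandescent filament peaks in the near infrared
```

That near-infrared peak is part of why incandescent light looks so much redder and ‘warmer’ than daylight.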

Virtually all digital cameras have the ability to adapt to different colour temperatures. This is important because our eyes generally make the correction automatically, while film and digital sensors do not. Looking at a scene under fluorescent lights, it doesn’t seem as green to us as it really is – and as it will appear on film or to an uncalibrated digital sensor. Exactly how you set the white balance on your camera varies by model and manufacturer, but it is worth checking the manual.

In addition to being used to correct for the dominant type of lighting in a scene, colour balance can be set so as to create a desired look that may not have been present in the original scene. For example, intentionally using the white balance for warm light (low temperature) in a scene with cool light (high temperature) exaggerates the cool light in the scene. As a result, you get a very cool looking photo like this one. By contrast, intentionally using the white balance for cool light (high temperature) in a scene that already has fairly warm light will exaggerate the warm light, as with this photo.

For users of Canon cameras, here is an easy way to try this out:

  • First, head out on a cold winter’s day and find a wintery looking scene.
  • Then, go into the white balance setting for your camera. If you have a point and shoot camera, this is normally done by setting the control dial on your camera to ‘P,’ then pressing the ‘Func. Set’ button in the middle of the wheel on the back. Scroll down once and you should be in the white balance menu. Press the ‘right’ button until you have ‘Tungsten’ selected.
  • If you have a digital SLR, there is usually a dedicated white balance button on the back, labelled ‘WB.’
  • Shoot the winter scene with that setting, and you will get a cool blue looking result.
  • Second, try shooting a warmish scene (such as one taken outside around sunset) with the camera set to ‘Cloudy.’ That will make it look even warmer, which is sometimes attractive.

When you choose a colour balance setting on your digital camera, you are telling it how to process the raw data from the sensor into a JPG image. Since the raw data isn’t normally retained, this is an irreversible choice (though it is possible to approximate a white balance change using software like Photoshop). For cameras capable of recording the data from the sensor as a RAW file, you will be able to select whatever white balance you like after the photo has been taken. Thanks to CHDK, a great many Canon cameras (including inexpensive point and shoot models) can be given this capability.
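To get a rough sense of what a white balance shift does to an already-processed JPG, here is a minimal sketch that simply scales the red and blue channels. The filename and gain values are made up, and this is only a crude stand-in for what a camera or RAW converter actually does:

```python
# Approximate a white balance shift by scaling the red and blue channels.
# Requires Pillow and numpy; the gains below are illustrative guesses.
from PIL import Image
import numpy as np

def shift_white_balance(in_path, out_path, red_gain, blue_gain):
    pixels = np.array(Image.open(in_path).convert("RGB"), dtype=np.float32)
    pixels[..., 0] *= red_gain   # red channel
    pixels[..., 2] *= blue_gain  # blue channel
    Image.fromarray(np.clip(pixels, 0, 255).astype(np.uint8)).save(out_path)

# Mimic the ‘Tungsten’ trick: pull red down and push blue up to cool the scene.
shift_white_balance("winter_scene.jpg", "cooler.jpg", red_gain=0.8, blue_gain=1.25)
```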

Incidentally, the matter of what wavelength of light is emitted by objects of different temperatures is a key part of the physics of climate change. One neat thing about science is the way you often run into aspects of one field that are relevant somewhere very different.

Internet footprints and future scrutiny

Frozen blue lake, Vermont

Both The Economist and Slate have recently featured articles about the increasingly long and broad trails people are leaving behind online: everything from comments in forums to Facebook profiles to uploaded photographs. Almost inevitably, some of this content is not the kind of thing that people will later want to see in the hands of their employers, the media, and so forth. I expect that more savvy employers are already taking a discreet peek online when evaluating potential hires.

The two big questions concern how attitudes will evolve, both among internet users in general and among scrutinizers like employers. It’s possible that people thirty years from now will view our open and informal use of the internet as roughly equivalent to the famously uninhibited sex had by hippies in the 1960s: a bit of a remarkable cultural phenomenon, but one long dead due to its inherent dangers. It is also possible that people will come to view the existence of such information online as an inevitability, and not judge people too harshly as a result. Less and less human communication is the ephemeral sort, where all record ceases once a person’s voice has attenuated. As a result, more of what people say and do at all times of their lives (and in all states of mind) is being recorded, often in a rather durable way.

Personally, I suspect that the trend will be towards both greater caution and greater tolerance. Internet users will become more intuitively aware of the footprints they are leaving (especially as more high-profile cases of major embarrassment arise) and employers and the media will inevitably recognize that almost nobody has produced a completely clean sheet for themselves. Of course, there will still be a big difference between appearing in photographs of booze-fueled university parties and appearing at KKK rallies. The likely trend is not that a wider range of activities will be excusable, but rather that more evidence about everything a person has done will be available.

We can also expect the emergence of more private firms that seek to manage online presence, especially after the fact. Whether that means bullying (or bribing) the owners of websites where unwanted content has cropped up, creating positive-looking pages that outrank negative ones, or stripping away elements of databases through whatever means necessary, there will be a market for data sanitation services. While some people are likely to push for revamped privacy laws, I don’t see these as likely to be much help in this situation. When people are basically putting this information out in public voluntarily, it’s not clear how legislation could keep it from being scrutinized by anyone who is interested.

A few related posts:

Location data and photography

Dylan and Dusty

Before long, I expect that many cameras will have built-in GPS receivers and the option to automatically tag every photo with the geographic coordinates of the place where it was taken. That will allow for some neat new kinds of displays: from personal photo maps that show the results of a single person’s travels to composites of the photos a great many people have taken of the same place.

For those who would be interested in such things, but don’t yet have equipment that can locate itself, it seems like there could be a simple workaround. These days, GPS tracking devices are quite affordable. All you need is a camera and a tracker with synchronized clocks. Then, you carry the tracker with you when you take photos. After you upload them to a computer, you can run software to automatically attach location data from the tracking system to the photos. Given the increasing number of cell phones with GPS capability, they might be the ideal devices to provide such locational data. You could even configure one to automatically upload a track of your movements to a web service, which would then match up that information with photos you upload later.

One snag would be photos taken in areas where GPS doesn’t work, such as on the subway. To deal with that, users could be presented with a few choices. The coordinates from the closest point in time where data is available could be used, a very general coordinate for the city or region in question could be substituted, or such photos could simply be left untagged.
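The matching step itself could be quite simple: pair each photo’s timestamp with the nearest point in the GPS track, and give up beyond some cutoff. A minimal sketch, where the track points and the ten-minute cutoff are invented for illustration:

```python
# Match a photo timestamp to the nearest point in a time-sorted GPS track.
from bisect import bisect_left
from datetime import datetime, timedelta

# (timestamp, latitude, longitude) points, e.g. parsed from a GPX log
track = [
    (datetime(2009, 1, 24, 14, 0), 45.4215, -75.6972),
    (datetime(2009, 1, 24, 14, 5), 45.4230, -75.6950),
]

def locate(photo_time, track, max_gap=timedelta(minutes=10)):
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # Look at the track points immediately before and after the photo time
    candidates = [track[j] for j in (i - 1, i) if 0 <= j < len(track)]
    nearest = min(candidates, key=lambda p: abs(p[0] - photo_time))
    if abs(nearest[0] - photo_time) > max_gap:
        return None  # e.g. taken on the subway: leave the photo untagged
    return nearest[1], nearest[2]

print(locate(datetime(2009, 1, 24, 14, 2), track))  # -> (45.4215, -75.6972)
```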

No doubt, people could dream up some very clever ways of using this kind of data, especially once a lot of it was online. You could, for instance, produce collages of how a particular area looked over time. A mountain valley could be presented from the perspectives of everyone from those hosting afternoon picnics to those undertaking technical climbs of the peaks, with spring and summer photos contrasted against snowy winter shots. Groups of friends could also watch their trails of photos diverge and overlap, as they move around the world.

All told, it could be a very interesting experiment in communal memory.

Sorting digital music

Fence in Vermont

When it comes to the organization of music, I am probably one of the most obsessive people out there. I would actually rather delete a song I cannot properly categorize than retain it as ‘Track 1’ by ‘Unknown Artist.’ Also, once I start categorizing something such as music or photos, I cannot rest easy until the task is done. It’s a tendency I need to be aware of and careful about. The decision to tag all my iPhoto images according to which friends are in them, for example, produced about three days’ worth of intense work.

Of course, iTunes is the ultimate enabler for music organization obsessives. It puts everything into a big database: song ratings (all my songs are rated), artists, titles, play counts, last played dates, etc. It lets you set up smart playlists that, for example, consist only of songs that are rated four or five stars and haven’t been played in the last two weeks. You can also tag your songs as Canadian, too obscene to be included in a random party playlist, or whatever other designations are useful to you. I have most of my good music sorted into mood-based categories, including angry, brazen, demure, dramatic, energetic, rebellious, sombre, and upbeat.
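A rule like that is really just a filter over the song database. Here is a toy version of the four-or-five-stars, not-played-in-two-weeks playlist, using made-up field names rather than anything iTunes actually stores:

```python
# A toy smart-playlist rule: highly rated songs not played in the last two weeks.
from datetime import datetime, timedelta

library = [
    {"title": "Song A", "rating": 5, "last_played": datetime(2009, 1, 2)},
    {"title": "Song B", "rating": 3, "last_played": datetime(2009, 1, 20)},
    {"title": "Song C", "rating": 4, "last_played": datetime(2009, 1, 23)},
]

cutoff = datetime(2009, 1, 25) - timedelta(weeks=2)
playlist = [s for s in library
            if s["rating"] >= 4 and s["last_played"] < cutoff]
print([s["title"] for s in playlist])  # -> ['Song A']
```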

One annoying element of the age of digital music is the enduring character of mix CDs consisting of CD-style music tracks, rather than data files. Almost invariably, this means that someone somewhere converted the uncompressed music on a CD into an MP3, AAC, or WMA file. Then, someone took that compressed file and stretched it back into CD format. If you then try to re-compress the previously compressed and de-compressed file, you encounter a notable loss of quality. It would be far better if people made mix CDs consisting of data files (those in a lossless format would be especially appreciated, and still significantly smaller than uncompressed music files).

One final annoyance I will mention is the fact that my iPod is no longer large enough to store my music collection. Since I am now about 500 megabytes beyond its capacity, I need to manually ‘uncheck’ songs so that it can synchronize properly. Beyond being a pain, this somewhat undermines the iPod concept, which is really to have all your music available at a touch. My iPod is an old 4th generation 20GB model. It was replaced four times under an extended warranty that has since expired, and it probably doesn’t have much time left in the world of working gear. When it finally dies, I will buy something large enough to store many years’ worth of future musical acquisitions.

Grid technologies to support renewable power

Indistinct Vermont barn

The MIT Technology Review has a good article about renewable energy and the ways electrical grids will need to change in order to accommodate it. Both key points have been discussed here before. Firstly, we need high-voltage, low-loss power lines from areas with lots of renewable potential (sunny parts of the southern US, windy parts of Europe, etc) to areas with lots of electrical demand. Secondly, we need a more intelligent grid that can manage demand and store some energy in periods of excess, for use in times when renewable output falters.

The article highlights how the advantages of a revamped grid are economic as well as environmental:

Smart-grid technologies could reduce overall electricity consumption by 6 percent and peak demand by as much as 27 percent. The peak-demand reductions alone would save between $175 billion and $332 billion over 20 years, according to the Brattle Group, a consultancy in Cambridge, MA. Not only would lower demand free up transmission capacity, but the capital investment that would otherwise be needed for new conventional power plants could be redirected to renewables. That’s because smart-grid technologies would make small installations of wind turbines and photovoltaic panels much more practical. “They will enable much larger amounts of renewables to be integrated on the grid and lower the effective overall system-wide cost of those renewables,” says the Brattle Group’s Peter Fox-Penner.

In short, a smarter grid holds out the prospect of overcoming the biggest limitation of electricity: that supply must always be exactly matched to demand, and that prospects for efficient storage have hitherto been limited. The storage issue, in particular, could be profoundly affected by the deployment of large numbers of electric vehicles with batteries that could be used in part as an electricity reserve for the grid.

Providing incentives for the development of a next-generation grid (as well as removing some of the legal and economic disincentives that prevent it) is an important role for governments – above and beyond the need to put a price on carbon. While carbon pricing can theoretically address the externalities associated with climatic harm from emissions, it cannot automatically deal with the obstacles holding back grid development, which include the monopoly status of many of the firms involved, issues concerning economies of scale, and the fact that the absence of transmission capacity restricts the emergence of renewable generation capacity (and vice versa).

The full article is definitely worth reading.

Learning about photographic flashes

Want to learn how to use an external flash with your SLR camera system? Strobist has a useful ‘Lighting 101’ series of articles. I have also had Light: Science and Magic by Steven Biver et al. strongly recommended to me.

Since I will be getting my hands on a 430EX II flash on Wednesday, doing a bit of pre-reading seemed sensible. The first photos I produce using it should appear here sometime after I return to Ottawa on the 28th.

The nature and future of wind power

This Economist article discusses the history, technology, and future of wind power. It includes a fair bit of useful information, particularly about integrating wind into the broader energy system:

In addition, the power grid must become more flexible, though some progress has already been made. “Although wind is variable, it is also very predictable,” explains Andrew Garrad, the boss of Garrad Hassan, a consultancy in Bristol, England. Wind availability can now be forecast over a 24-hour period with a reasonable degree of accuracy, making it possible to schedule wind power, much like conventional power sources.

Still, unlike electricity from traditional sources, wind power is not always available on demand. As a result, grid operators must ensure that reserve sources are available in case the wind refuses to blow. But because wind-power generation and electricity demand both vary, the extra power reserves needed for a 20% share of wind are actually fairly small—and would equal only a few percent of the installed wind capacity, says Edgar DeMeo, co-chair of the 20% wind advisory group for America’s Department of Energy. These reserves could come from existing power stations, and perhaps some extra gas-fired plants, which can quickly ramp up or down as needed, he says. A 20% share of wind power is expected to raise costs for America’s power industry by 2%, or 50 cents per household per month, from now until 2030.

In 2007, 34% of the new electricity generation capacity that came online in the United States was in the form of wind turbines; China has doubled its capacity every year since 2004. Some 20% of Danish electricity already comes from wind, along with 10% in Spain and 7% in Germany. Given aggressive construction plans in Asia, North America, and Europe, wind power definitely looks like a technology with a big future.