Climate change on the Globe and Mail wiki

The Globe and Mail has an initiative called Policy Wiki, which aims to foster web discussions of public policy issues of interest to Canadians. The third topic they have selected is climate change. The site includes a briefing note by Simon Fraser University’s Mark Jaccard, as well as an analysis and proposal from the Pembina Institute and the David Suzuki Foundation.

Some of the sub-questions to be discussed include:

  1. How closely should Canada’s policies be linked to the US?
  2. Should our focus be bilateral or multilateral?
  3. What position should Canada adopt at the Copenhagen conference?
  4. How does the economic crisis impact actions on climate change?
  5. How will this impact Canadian industry?
  6. How many green jobs can Canada create?
  7. What added responsibility does Canada have as an energy superpower?

Many of the most frequent commenters on this site are quite concerned with Canadian climate policy, so this might be an opportunity to discuss the issue with a broader audience. I plan to contribute personally, and would be pleased to see readers do so as well.

Creative Commons ‘zero’ license

It says something about the current climate of intellectual property law that Creative Commons has released a new ‘zero’ license, which strives to do everything legally possible to put a work into the public domain. The new license is meant to be an improvement over their previous public domain dedication service:

The CC0 system works better internationally, is likely more legally valid (since one can not dedicate their works into the public domain in many countries and there are questions about doing so in the U.S.) and, if the icon and meaning becomes recognizable enough, more clear.

It seems a bit remarkable that it is so difficult to choose to give intellectual property away. I can understand the importance of legal protections to ensure that people don’t do so by accident (particularly children), but it does seem as though there should be a straightforward legal mechanism to waive all rights as the creator of a work.

The contents of this site are under a Creative Commons license: specifically, one that allows anyone to copy, distribute and transmit the work, as well as produce adaptations. It requires that the work be attributed to me, that any derivative works be subject to the same rules, and does not grant these rights automatically for commercial purposes. That is to say, if someone wants to use one of my images on a personal site, with attribution, that’s fine; if Visa wants to use it in a commercial, I expect them to pay for the usage rights.

Creative Commons licenses are very valuable because they allow creators of content to establish such regimes without needing to hire lawyers or spend a lot of time and money.

On blog post timing

My current system is to produce two posts a day (sometimes one on weekends or when I am very busy). The first post includes a photo, and is generally the more substantive of the two. One post comes out at a random time between 7:00am and 8:00am Ottawa time; the other, at a random time between 6:00pm and 7:00pm.

Given that almost all the posts are written in advance, these time conventions are arbitrary. Would readers prefer for them to come out at different times? For instance, the first could be released earlier in the morning, for the benefit of those who habitually rise long before me.

Time zones are also a consideration. During the past year, there have been 22,253 visitors from across Canada, 19,789 from the USA, 5,597 from the UK, 1,314 from India, 1,297 from Australia, 642 from Germany, and fewer than 500 apiece from 165 other states. The times at which posts are released matter most to regular readers, who tend to be in England (GMT), the Toronto-Ottawa area (GMT-5), and the Vancouver area (GMT-8). That means posts come out in Vancouver between 4:00am and 5:00am, as well as between 3:00pm and 4:00pm. In London and Oxford, they come out between noon and 1:00pm, as well as between 11:00pm and midnight.
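For readers in other time zones, the conversion is easy to script. Here is a minimal sketch in Python (using the standard zoneinfo module; the date is arbitrary):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The start of each daily release window, expressed in Ottawa time
ottawa = ZoneInfo("America/Toronto")
windows = [datetime(2009, 2, 20, 7, 0, tzinfo=ottawa),   # morning post
           datetime(2009, 2, 20, 18, 0, tzinfo=ottawa)]  # evening post

# Convert each window into the time zones where most readers live
for zone in ["America/Vancouver", "Europe/London", "Asia/Kolkata"]:
    for start in windows:
        local = start.astimezone(ZoneInfo(zone))
        print(f"{zone}: posts begin appearing around {local:%H:%M}")
```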

Also, do readers prefer the semi-random system, or would it be better for posts to come out at exactly the same times each day?

The Kindle and electronic books

Ottawa bus stop in winter

In a recent article about Amazon’s Kindle e-book reader, The Economist declared that:

It seems likely that, eventually, only books that have value as souvenirs, gifts or artefacts will remain bound in paper.

Despite being a big fan of electronic content delivery systems, I wholeheartedly disagree with this assessment. A personal library of physical books has considerable advantages, and keeping your books in electronic form has significant disadvantages.

Physical books possess the many advantages of immediacy. One can display them and quickly glance through the whole collection. One can take notes in them, mark pages, stack them, pass them to others, and so forth. Collections of books are also physical representations of the reading a person has done. I often notice that, when I first find myself in someone’s house, flat, or bedroom, their collection of books is the first thing I scrutinize. There is a reason why the personal libraries of intellectuals and political leaders are objects of interest, and I don’t think they would retain the same importance if they consisted of a bunch of PDF or text files.

Electronic books have the same disadvantages as other electronic media: you can’t be confident that they will be intact and accessible decades from now. Furthermore, they are often hobbled with digital rights management (DRM), which means you can never be sure that you can use them on future devices, or in various ways you might wish to. A library stored on a small device may be easier to transport, but it is a lot less trustworthy, durable, and reliable than one that you need to cart around in a heavy collection of boxes.

Electronic books can certainly complement physical ones. It would, for instance, be very valuable to be able to search electronic copies of books you own. A custom search engine covering all the books one owns or has borrowed would be excellent for tracking down particular passages or conducting general research. Partly for these synergistic reasons, and partly for the reasons listed above, I don’t think physical books are ever likely to become rare.
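As a rough sketch of how simple such a tool could be, here is a minimal keyword index in Python, assuming one’s books were already available as plain text files (the file names below are hypothetical):

```python
import re
from collections import defaultdict
from pathlib import Path

def build_index(book_paths):
    """Map each word to the set of books in which it appears."""
    index = defaultdict(set)
    for path in book_paths:
        text = Path(path).read_text(encoding="utf-8").lower()
        for word in re.findall(r"[a-z']+", text):
            index[word].add(path)
    return index

def search(index, query):
    """Return the books containing every word in the query."""
    sets = [index.get(word, set()) for word in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

# Hypothetical plain-text copies of books in one's library
library = ["library/walden.txt", "library/leviathan.txt"]
index = build_index(library)
print(search(index, "quiet desperation"))
```

A real tool would also report page or line numbers, but even this much would make it possible to find which book a half-remembered phrase came from.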

I do see much more promise for electronic periodicals. Hardly anybody wants to keep physical copies of their newspaper or magazine subscriptions on hand, especially when the contents are available in an easily searchable form online. If I got a Kindle, it would be for the wireless newspaper and Wikipedia access, not for the $10 book downloads.

Free lectures from top American schools online

As described in this Slate article, a new site called Academic Earth has brought together a large number of lecture videos and made them available online for free. Right now, it includes lectures from Berkeley, Harvard, MIT, Princeton, Stanford, and Yale.

There is a six-lecture series on Understanding the Financial Crisis.

Legal guide for bloggers

Andrea Simms-Karp winking

For those who are serious about their blogging, or simply concerned about the legal ramifications of the practice, the Electronic Frontier Foundation has a Bloggers’ Legal Guide available.

While it is focused on American law, the general principles and issues discussed are likely to be relevant elsewhere. Issues covered include intellectual property, defamation, the legal status of bloggers as journalists, and more. It also includes a page specifically for students.

People living in countries that have weaker protections for free speech might be better served by the BBC’s guide: How to avoid libel and defamation. On a side note, I certainly hope that British law evolves away from requiring the author to prove their comments were justified and towards requiring the person or organization alleging libel or defamation to prove that such things took place. The current approach encourages frivolous lawsuits and drives journalists to bury or tone down stories without due cause.

The ‘SSL strip’ exploit

Emily Horn with garlic bread

The Secure Sockets Layer (SSL) is one of the world’s most important forms of commercial encryption. It is the public key system generally employed by e-commerce websites like Amazon, in order to prevent payment details from being intercepted by third parties. At this week’s Black Hat security conference in Washington, details were released on an exploit that takes advantage of the weak way in which users are transitioned from insecure (HTTP) to secure (HTTPS) versions of websites.

The tool – called ‘SSL strip’ – is based around a man-in-the-middle attack, in which the system for redirecting people from the insecure to the secure version of a web page is abused. By acting as a man-in-the-middle, the attacker can compromise any information sent between the user and the supposedly secure webpage. The author of the exploit claims to have used it to steal data from PayPal, Gmail, Ticketmaster, and Facebook – including sixteen credit card numbers and control of more than 100 email accounts.

This kind of vulnerability has always existed with SSL because it is difficult to be certain about where the endpoints of communication lie. Rather than having a secure end-to-end connection between you and Amazon, there might be a connection between you and an attacker (who can read everything you send in the clear), and then a second, secure connection between the attacker and Amazon.

To some extent, the problem can be mitigated through technical means (as described in the linked article). Beyond that, the question arises of what constitutes adequate precautions, from both a legal and a personal standpoint, and who should pay the costs associated with data breaches and fraud.
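One such precaution, sketched below in Python, is for client software to connect directly over TLS and verify the server’s certificate and host name, rather than trusting a redirect from an insecure page (the host name is only a placeholder):

```python
import socket
import ssl

def fetch_securely(host):
    """Connect straight to the HTTPS endpoint, verifying the certificate,
    so an attacker cannot silently interpose between the two endpoints."""
    context = ssl.create_default_context()  # enables cert and host name checks
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Protocol:", tls.version())
            print("Certificate subject:", tls.getpeercert()["subject"])
            request = (b"GET / HTTP/1.1\r\nHost: " + host.encode() +
                       b"\r\nConnection: close\r\n\r\n")
            tls.sendall(request)
            return tls.recv(4096)

# Going straight to port 443 avoids the insecure HTTP-to-HTTPS hand-off
# that SSL strip abuses.
print(fetch_securely("www.example.com")[:200])
```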

[Update: 23 February 2009] The slides from the original presentation about SSL strip are available here and here. Both servers are under a fair bit of strain, due to all the popular interest in this topic, so it may be tricky to access them during the next few days.

[Update: 25 February 2009] SSL strip can actually be downloaded from Marlinspike’s website.

[Update: 5 November 2009] One thing I think these SSL exploits (and others described in comments below) demonstrate is that we cannot rely completely on technical means to avoid fraud and theft online. There is also a role to be played by laws on liability and other means.

Canadian content requirements for the internet?

Apparently, the Canadian Radio-television and Telecommunications Commission (CRTC) is considering Canadian content requirements for the internet. While I do support the existence of public broadcasters, I have never felt the same way about Canadian content rules for television or the radio. To me, they seem parochial and unnecessary; why does it matter whether people want to watch shows or listen to music that originated elsewhere?

Of course, the internet idea is even more dubious. Unlike radio and television, where you get to choose between channels but have no input into what each one is putting out, the internet lets you choose each film or song individually. As such, enforcing Canadian content requirements is both more intrusive and less practically feasible.

I remember when there were high hopes that the internet would be free from this sort of petty governmental manipulation. Unfortunately, with all the censorship, dubious monitoring, and other governmental shenanigans happening now, it isn’t surprising that yet another government agency wants to assert its regulatory influence over what happens online.

Hearings begin on Tuesday, with the aim of reviewing the current policy of not regulating content on cell phones and the internet.

Webs of trust in academic publishing

Geometric sculpture

Public key cryptography was a breakthrough because of the many new types of secure communication it suddenly permitted: most importantly, between people who do not have a trusted channel through which to exchange a symmetric key. Instead, it permits each partner to make a public key widely available, as well as use the public keys of others to encrypt messages that only the intended recipients can decrypt.

One avenue of attack against this kind of system is for an attacker to make a public key available that they pretend belongs to someone else. For instance, you might try to impersonate a government or industry figure, then have people send sensitive materials to you inadvertently. One way to prevent this kind of attack is to use key signing: an approach employed by both the commercial software PGP and the free GPG alternative. With key signing, you produce a web of trust, in which people use their own secret keys to vouch for the validity of public keys posted by others. That way, if I trust Bob and Bob trusts Jim, I can adopt that trust transitively.
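As a toy illustration of that transitive logic, with hypothetical names standing in for verified key signatures:

```python
def is_trusted(signed_by, me, target, max_depth=5):
    """Return True if a chain of key signatures connects me to target.

    signed_by maps each key holder to the set of keys they have signed,
    standing in for verified PGP/GPG signatures. max_depth caps the chain,
    since trust is usually not extended indefinitely."""
    frontier, seen = {me}, set()
    for _ in range(max_depth):
        frontier = {k for s in frontier for k in signed_by.get(s, set())} - seen
        if target in frontier:
            return True
        seen |= frontier
    return False

# Hypothetical web of trust: I signed Bob's key, and Bob signed Jim's.
signed_by = {"me": {"bob"}, "bob": {"jim"}}
print(is_trusted(signed_by, "me", "jim"))  # True, via Bob
print(is_trusted(signed_by, "me", "eve"))  # False: no signature chain
```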

GPeerReview is a system intended to extend this trust function to the review of academic work. Reviewers produce comments on documents and sign them with their keys. These comments can include different levels of endorsement for the work being scrutinized.
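I have not examined GPeerReview’s actual formats, but the core mechanism presumably resembles this sketch, which uses the third-party Python cryptography package to bind a signed endorsement to an exact version of a document:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The reviewer's long-term key pair (generated here for illustration;
# in practice it would live in a keyring and be signed by colleagues)
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hash the document so the endorsement applies only to this exact text
document = b"...full text of the paper under review..."
review = b"endorsement: accept; comments: methodology appears sound"
message = hashlib.sha256(document).digest() + review

signature = private_key.sign(message)

# Anyone holding the reviewer's public key can confirm the review is genuine
try:
    public_key.verify(signature, message)
    print("Review is authentic and unaltered")
except InvalidSignature:
    print("Review or document has been tampered with")
```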

It is difficult to know whether the level of academic fraud that takes place justifies this sort of cryptographic response, but it seems like a neat idea regardless. Providing secure mechanisms for people to prove who they are and that things are properly attributed to them is increasingly important as technology makes it ever-easier for nefarious individuals to impersonate anyone in front of a wide audience.

Visualizing power usage

Man on bridge, Ottawa

Of late, Google has certainly committed itself to some novel and ambitious energy projects. Their PowerMeter project probably scores fairly low on the scale of ambition, but it could nonetheless be very useful. The idea is to take in data from smart electrical meters on homes and process it into a form, accessible online, that is useful for the people who live in them. It looks like it will resemble the Google Analytics system for website statistics tracking, but it will be concerned with energy usage instead. Ideally, it will be able to isolate electricity usage associated with different activities and appliances, allowing consumers to better understand how they are using power and adjust their behaviour to do so more economically and sustainably.

Particularly when paired with differing electricity prices at different times (in order to smooth out variations between times of peak demand and times of minimal demand), such a system could encourage efficiency, help to balance the grid, and reduce greenhouse gas emissions.
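As a rough illustration, with entirely hypothetical rates and meter readings, here is how hourly usage data and time-of-use pricing would combine:

```python
# Hypothetical time-of-use rates, in dollars per kWh
def rate(hour):
    if 7 <= hour < 11 or 17 <= hour < 19:
        return 0.14  # peak
    if 11 <= hour < 17:
        return 0.10  # mid-peak
    return 0.05      # off-peak

# Hypothetical hourly smart meter readings for one day, in kWh
readings = [0.4] * 7 + [1.2] * 4 + [0.8] * 6 + [1.5] * 2 + [0.6] * 5

daily_cost = sum(kwh * rate(hour) for hour, kwh in enumerate(readings))
print(f"Cost for the day: ${daily_cost:.2f}")
```

Even this crude breakdown shows why shifting flexible loads, like laundry or dishwashing, into off-peak hours saves money once such pricing is in place.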

I certainly hope it is eventually made compatible with the smart meters Hydro Ottawa has been installing. I have contacted them to ask, but am still waiting for a response.