Framing, selection, and presentation issues

Harris Manchester College, Oxford

One of the major issues that arises when examining the connections between science and policy is the way information is framed. You can say that the rate of skin cancer caused by a particular phenomenon has increased from one case in ten million to one case in a million. You can say that the rate has increased tenfold, or that it has gone up by 900%. Finally, you could say that an individual’s chances of getting skin cancer from this source have gone up from one tiny figure to a larger, but still tiny-seeming, figure. People seem to perceive the risks involved in each presentation differently, and people pushing for one policy or another can manipulate that. This is especially true when the situations being described are not comparably rare: having your chances of being killed through domestic violence reduced by 1% is a much greater absolute reduction than having your chances of dying in a terrorist attack reduced by 90%, simply because the baseline risk of the former is so much higher.
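To make the arithmetic behind these framings concrete, here is a minimal Python sketch using the hypothetical skin cancer figures above; the numbers are illustrative, not drawn from any real study:

```python
# Hypothetical rates mirroring the skin cancer example above.
baseline_rate = 1 / 10_000_000   # one case in ten million
new_rate = 1 / 1_000_000         # one case in a million

relative = new_rate / baseline_rate              # 10.0  -> "a tenfold increase"
percent = (new_rate / baseline_rate - 1) * 100   # 900.0 -> "up by 900%"
absolute = new_rate - baseline_rate              # 9e-07 -> "still a tiny figure"

print(f"{relative:.0f}x larger, up {percent:.0f}%, absolute rise {absolute:.7f}")
```

Run in reverse, the same arithmetic explains the terrorism comparison: a 90% relative cut in a very rare risk can remove far less absolute risk than a 1% relative cut in a common one.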

Graphing

When talking about the presentation of information, graphs are an important case. Normally, they are a great boon to understanding. A row of figures means very little to most people, but a graph provides a wealth of comprehensible information. You can see whether there is a trend, what direction it is in, and approximately how strong it is. The right sort of graph, properly presented, can immediately illuminate the meaning of a dataset. Likewise, it can provide a compelling argument: at least among those who disagree more about what is going on than about how it would be appropriate to respond.

People see patterns intuitively, though sometimes they see order in chaos (the man in the moon, images of the Virgin Mary in cheese sandwiches). Even better, they have an automatic grasp of calculus. People who couldn’t tell you a thing about concavity and the second derivative can immediately see when a slope is upward and growing ever steeper, or when something is increasing or decreasing at a diminishing rate. They can see which trends will level off, and which will explode off the scale. My post on global warming damage curves illustrates this.

Naturally, it is possible to use graphs in a manipulative way. You can tweak the scale, use a broken scale, or use a logarithmic scale without making clear what that means. You can position pie charts so that one part or another is emphasized, as well as abuse colour and three-dimensional effects. That said, the advantages of graphs clearly outweigh the risks.
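As a sketch of the most common trick, the truncated axis, here is a minimal matplotlib example; the data are invented, and the point is only how the choice of vertical scale changes the impression the same numbers give:

```python
import matplotlib.pyplot as plt

# Invented data: a modest rise that the axis choice can exaggerate.
years = [2000, 2001, 2002, 2003, 2004]
values = [100, 101, 102, 103, 105]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.plot(years, values)
ax1.set_ylim(0, 110)   # axis starting at zero: the rise looks modest
ax1.set_title("Axis from zero")

ax2.plot(years, values)
ax2.set_ylim(99, 106)  # truncated axis: the same rise looks dramatic
ax2.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```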

It is interesting to note how central a role one graph seems to have played in the debate about CFCs and ozone: the one showing the concentration of chlorine in the stratosphere. Since chlorine is what CFCs break down to produce, and chlorine is what causes the breakdown of ozone, that concentration is clearly important. The graph, by clearly showing that concentrations would continue to rise even under the original Montreal Protocol, seems to have had a big impact on the two rounds of further tightening. Perhaps the graph used so prominently by Al Gore in An Inconvenient Truth (the trends on display literally dwarfing him) will eventually have a similar effect.

Stats in recent personal experience

My six-month-old Etymotic ER6i headphones are being returned to the manufacturer tomorrow, because of the problems with the connector I reported earlier. It is really not something you expect from such a premium product, but I suppose there are always going to be some defects arising in a manufacturing process. Of course, being without good noise-isolating headphones for the time it will take them to be shipped to the US, repaired or replaced, and returned means that reading in coffee shops is not a possibility. Their advantage over libraries only exists when you are capable of excluding the great majority of outside noise and of drowning out the rest with suitable music.

Speaking of trends, I do wonder why so many of my electronics seem to run into problems. I think this is due to a host of selection effects. I (a) have more electronics than most people, (b) use them a great deal, (c) know how they are meant to work, (d) know what sort of warranties they have and for how long, (e) treat them so carefully that manufacturers can never claim they were abused, and (f) maintain a willingness to return defective products as many times as is necessary and possible under the warranty. Given all that, it is not surprising that my experience of electronics failing and being replaced under warranty far exceeds whatever the background rate of such activity might be.

Two other considerations are also relevant. First, it is cheaper for manufacturers to rely upon consumers to test whether a particular item is defective, especially since some consumers will lose the item, abuse it, or simply not bother to return it even if it is defective. Secondly, it is almost always cheaper to simply replace consumer electronics than to fix them, because of the economies of scale involved in each activity. From one perspective, that seems wasteful. From another, it seems the more frugal option. A bit of a paradox, really.

[14 March 2007] My replacement Etymotic headphones arrived today. Reading in coffee shops is possible again, and none too soon.

Exploring Oxford colleges

At the same time as the second chapter of my thesis is firming up, my initiative to visit and photograph all 39 colleges is proceeding apace. Today, I visited Somerville College (where Margaret Thatcher read chemistry, a factor that may have contributed to her eventual strong support for CFC regulation, despite her ideological leanings) as well as Kellogg, St. Peter’s, and Lincoln. Only Linacre, Mansfield, Oriel, Pembroke, St Cross, St Hilda’s, and Templeton College have so far been spared my lens. That said, not all the photos I have taken in recent days have been posted yet. When one is mired in academic work, it is good to have a reserve. Likewise, it is good to have a pattern of exploration: using a quad or coffee shop here or there to read a chapter or two, before moving on to the next target.

A tip for fellow explorers: make sure you speak to the porters before wandering in. Particularly in the less well-known colleges, they will be happy to let you in if you tell them that you are a student at a different college and have been wanting to have a look at some of those you haven’t seen before. Among all the colleges I have visited so far, the porters at Kellogg and Lincoln have been the most helpful. The only colleges that have refused me admission (or demanded money) are Christ Church and Magdalen. While I understand that they risk being besieged by tourists, it is hardly appropriate to bar the foreign graduate students who are subsidizing their fine stonework and scores of undergraduates.

In any case, I expect that the collection will be complete by the time this site gets its 50,000th visit. That should be within the next two weeks, at which time I will be spending my days fretting about drafting chapter three.

On darkness

A student housing vignette

When one of your circuit breakers blows, you need to go ask your landlords for the key to the cupboard where the switches are. When the light bulb in one room burns out, you use candles until the college replaces it.

The first, you can really do very little about. The second reflects the transience of the whole experience.

Favourite reading spots in Oxford

Queen’s College, Oxford

One of the most telling things about a person’s personality may be where they choose to do the masses of reading that dominate the life of an Oxford student. There is a certain sort that appreciates the reading rooms in the Bodleian (and another sort forced there by the location of necessary materials). Some people like the cold modernism of the Social Sciences Library, while others adore the grandeur of the Upper Camera.

Personally, I tend to stick to a collection of locations around the centre of town. These include the Wadham Library (more for a sense of connection to the college than because it is attractive or has useful materials), the Wadham MCR, the Codrington Library, the Upper Camera, and the Starbucks locations on the High Street and Cornmarket Street. Sometimes, if it is nice, the Wadham gardens get added to the rotation, especially the little area at the western edge of the private fellows’ garden that is a bit obscured by plants. That said, I probably read more in my room than in all other places put together – enormously more if you include things read on the computer.

What places do other residents appreciate? Are there any that I simply must try during the 125 days that remain to me here?

The identification of environmental problems

The identification of an environmental ‘problem’ is not a single crystalline moment of transition from ignorance to understanding. Rather, it is ambiguous, contingent, and dependent upon the roles and modes of thinking of the actors involved, as well as the values that inform their judgments. Much like Thomas Kuhn’s example of the discovery of oxygen (with different people accessing different aspects of the element’s nature, and understanding it in different contexts), the emergence of what is perceived as a new environmental problem occurs at the confluence of facts, roles, and existing understandings. While one or more causal connections ultimately form the core of how an environmental problem is understood, those connections are given comprehensibility and salience by factors that are not strictly rational. From the perspective of global environmental politics and international relations, environmental problems are best understood as complexes of facts and judgments: human understandings that are subjective and dynamic, even though elements of their composition are firmly grounded in the empirical realities of the world.

POPs and climate change

Consider first the case of persistent organic pollutants (POPs). The toxicity of chemicals like dioxins was known well before any of the key events that led to the Stockholm Convention. At the time, the problem of POPs was largely understood as one of local contamination by direct application or short distance dispersal. It took the combination of the observation of these chemicals in an unexpected place, the development of an explanation for how this had transpired, and a set of moral judgments about acceptable and unacceptable human conduct to form the present characterization of the problem. That understanding in turn forms the basis for political action, the generation of international law, and the investigation of techniques and technologies for mitigating the problem as now understood. Even now, the specific chemicals chosen and the particular individuals whose interests are best represented are partly the product of political and bureaucratic factors.

If we accept former American Vice President Al Gore’s history of climate change, the form of problem identification is even more remarkable. He asserts that it was the discovery of rising atmospheric CO2 concentrations by Roger Revelle in the 1960s, rather than the direct observation of specific changes to the global climatic system, that prompted the initial concern of some scientists and policy makers. This is akin to how the 1974 paper by Mario Molina and F.S. Rowland established the chemical basis for stratospheric ozone depletion by CFCs, which in turn led to considerable action before their supposition was empirically confirmed. Gore’s characterization of the initial discovery of the climate change problem also offers glimpses into some of the heuristic mechanisms people use to evaluate key information: deciding which arguments, individuals, and organizations are trustworthy, and then prioritizing ideas and actions.

Definition and initial implications

For the present moment, environmental ‘problems’ will be defined as the consequences of unintentional (though not necessarily unanticipated) side effects of human activity in the world. While mining may release heavy metals into the natural environment, this did not crystallize in people’s minds as a problem until the harm those metals caused to human beings and other biological systems became evident. While the empirical reality of heavy metal buildup may have preceded any human understanding of the issue, it could not really be understood as an environmental problem at that time. It only became so through the confluence of data about the world, a causal understanding connecting actions and outcomes, and moral judgments about what is right or desirable. Likewise, while lightning storms cause harm both to humans and to other biological systems, their apparent status as an integral component of nature, rather than as a product of human activities, makes them something other than an environmental problem as described here. Of course, if it were shown, for example, that climate change was increasing the frequency and severity of thunderstorms (a human behaviour causing an unwanted outcome, through a comprehensible causal link), then that additional damage could be understood as an environmental problem in the sense used here.

Worth noting is the possibility of a dilemma between two sets of preferences and understandings: the alleviation of one environmental problem, for instance by regulating the use of DDT, may reduce the extent to which another problem can be addressed, such as the possibility of increased prevalence of malaria in a warmer world. It is likewise entirely possible that different groups of people will ascribe different value judgments to the same empirical phenomena. For instance, ranchers and conservationists disagree about whether it is desirable to have wild wolves in the western United States.

Problem identification, investigation, and the formulation of understandings about the connections between human activity and the natural world do not form a linear progression. This is partly the product of how human psychological processes develop and maintain understandings about the world, and partly the consequence of the nature of scientific investigation and of political and moral deliberation. Existing understandings can be subjected to shocks caused by either new data or new ideas. Changed understandings in one area of inquiry can prompt the identification of possible problems in another. Finally, the processes and characteristics of problem investigation are conditioned by heuristic, political, and bureaucratic factors that will be discussed at greater length below.

Problematizing the origin of environmental problems as human understandings does not simply add complexity to the debate. It generates possibilities for a more rigorous understanding of the relationship between human beings and nature (including perceptions about why the two are so often seen as distinct). It also offers the possibility of dealing with dilemmas like the example above in a more informed and effective manner.

The Oxford Botanic Gardens

Magdalen College, Oxford

As places in Oxford go, the Botanic Gardens across from Magdalen College are a real jewel. They are peaceful, beautiful, intellectually engaging, and easily capable of yielding dozens of good photos, even if you have been there many times before.

During Oxford’s long winter, the greenhouses make for a particularly nice contrast, both with the world outside and with each other. Some are arid and no warmer than the air outside (largely filled with interesting cacti); others are almost tropical in warmth and humidity and contain many plants normally seen only as foodstuffs – from coffee and tea to black pepper and ginger. Some of the flowers are also quite dramatic.

The gardens are free for students and university staff all year round, and free to everyone during the winter.

PS. The connector on my snazzy Ety headphones seems to be broken. Jostling it around even a little bit causes the sound to cut out on one side or the other. I will call them on Monday about having them repaired.

Reading fiction aloud

Saint Catherine’s College, Oxford

I attended a sustainability forum in Wadham tonight, followed by a fancy dinner. I even got to see a well situated and previously unexplored room in college. Much more enjoyable, however, was spending a couple of hours later in the night reading aloud from Stanislaw Lem’s Mortal Engines, Simon Singh’s Fermat’s Last Theorem, Vladimir Nabokov’s Lolita, Jack Kerouac’s On the Road, and chapters 2-47 of Mark Haddon’s The Curious Incident of the Dog in the Night-Time.

I really love fiction, and quite enjoy reading aloud. With unfamiliar text, it can be quite challenging, even in the best of circumstances. You need to develop an intuition for the shape of an author’s phrases, so that you can start speaking the first portion before you have read the end. Perhaps that explains why I appreciate Nabokov so much and never enjoyed Faulkner. I don’t think you could read the latter aloud, except in halting steps where an entire sentence is decoded before the first syllable is uttered.

Making a hash of things

The following is the article I submitted as part of my application for the Richard Casement internship at The Economist. My hope was to demonstrate an ability to deal with a very technical subject in a comprehensible way. This post will be automatically published once the contest has closed in all time zones.

Cryptography
Making a hash of things

Oxford
A contest to replace a workhorse of computer security is announced

While Julius Caesar hoped to prevent the hostile interception of his orders through the use of a simple cipher, modern cryptography has far more applications. One of the key drivers behind that versatility is an important but little-known tool called a hash function: an algorithm that takes a particular collection of data and generates a smaller ‘fingerprint’ from it. That fingerprint can later be used to verify the integrity of the data in question, which could be anything from a password to digital photographs collected at a crime scene. Hash functions are used to protect against accidental changes to data, such as those caused by file corruption, as well as against intentional efforts at fraud. Cryptographer and security expert Bruce Schneier calls hash functions “the workhorse of cryptography” and explains, “Every time you do something with security on the internet, a hash function is involved somewhere.” As techniques for digital manipulation become more accessible and sophisticated, the importance of such verification tools grows. At the same time, the emergence of a significant threat to the most commonly used hashing algorithm in existence has prompted a search for a more secure replacement.
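For a concrete sense of what a fingerprint looks like, here is a minimal Python sketch using the standard hashlib library; SHA-256 is used purely for illustration and is not the threatened algorithm discussed below:

```python
import hashlib

message_a = b"Attack at dawn"
message_b = b"Attack at dusk"

# A one-word change yields a completely different digest, which is what
# makes the fingerprint useful for detecting tampering or corruption.
print(hashlib.sha256(message_a).hexdigest())
print(hashlib.sha256(message_b).hexdigest())
```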

Hash functions modify data in ways subject to two conditions: that it be impossible to work backward from the transformed or ‘hashed’ version to the original, and that multiple originals not produce the same hashed output. As with standard cryptography (in which unencrypted text is passed through an algorithm to generate encrypted text, and vice versa), the standard of ‘impossibility’ is really one of impracticability, given available computing resources and the sensitivity of the data in question. The hashed ‘fingerprint’ can be compared with a file and, if they still correspond, the integrity of the file is affirmed. Also, computer systems that store hashed versions of passwords do not risk yielding all user passwords in plain text if the files containing them are accidentally exposed or maliciously infiltrated. When users enter passwords to be authenticated, the passwords can be hashed and compared with the stored versions, without the unencrypted form ever needing to be stored. Given the frequency of ‘insider’ attacks within organizations, such precautions benefit both the users and the owners of the systems in question.
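Here is a minimal sketch of that store-and-compare pattern in Python, with an invented password and salt; note that real systems should prefer a deliberately slow password-hashing scheme such as bcrypt or scrypt over a bare general-purpose hash like the one shown:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # Illustration only: real systems should use a deliberately slow,
    # salted scheme (e.g. bcrypt or scrypt) rather than a bare fast hash.
    return hashlib.sha256(salt + password.encode()).hexdigest()

# At registration, the system stores the salt and the hash, never the password.
salt = os.urandom(16)
stored = hash_password("hunter2", salt)  # invented example password

# At login, the submitted password is hashed the same way and compared.
print(hash_password("hunter2", salt) == stored)  # True
print(hash_password("wrong", salt) == stored)    # False
```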

Given their wide range of uses, the integrity of hash functions has become important for many industries and applications. For instance, they are used to verify the integrity of software security updates distributed automatically over the Internet. If malicious users were able to modify a file in a way that did not change the ‘fingerprint,’ as verified through a common algorithm, it could open the door to various kinds of attack. Alternatively, malicious users who could work backward from hashed data to the original form could compromise systems in other ways. They could, for instance, gain access to the unencrypted form of all the passwords in a large database. Since most people use the same password for several applications, such an attack could lead to further breaches. The SHA-1 algorithm, which has been widely used since 1995, was significantly compromised in February 2005 by a team led by Xiaoyun Wang and based primarily at China’s Shandong University. The team had previously demonstrated attacks against MD5 and SHA-0, hash functions that preceded SHA-1. Their success has prompted calls for a more durable replacement.
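The verification step that such attacks aim to defeat is itself simple. Here is a sketch in Python of how a downloaded update might be checked against a published fingerprint; the filename and expected digest below are placeholders, not real values:

```python
import hashlib

def file_fingerprint(path: str) -> str:
    # Read in chunks so large downloads need not fit in memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both the filename and the published digest are hypothetical placeholders.
published = "0000...placeholder...0000"
if file_fingerprint("update.bin") == published:
    print("Fingerprint matches: the update is intact.")
else:
    print("Fingerprint mismatch: the file may be corrupt or tampered with.")
```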

The need for such a replacement has now led the U.S. National Institute of Standards and Technology to initiate a contest to devise a successor. The competition is to begin in the fall of 2008 and continue until 2011. Such contests have a promising history in cryptography. Notably, the Advanced Encryption Standard, devised as a more secure replacement for the prior Data Encryption Standard, was chosen through an open competition among fifteen teams of cryptographers between 1997 and 2000. At least some of those disappointed in that contest are now hard at work on what they hope will become one of the standard hash functions of the future.