On fundamental physics

Graffiti near the Oxford Canal

Watching this video about the Large Hadron Collider (a particle accelerator under construction at CERN), I was reminded of something I was wondering about a few weeks ago. People talk about the universe being the size of a grain of sand, or the size of a marble, in the moments immediately following the big bang. That seems comprehensible enough, but there is a fundamental problem with the analogy. The marble-sized thing wasn’t just all the mass in the universe, expanding into space that existed prior to the ‘explosion.’ Instead, space and time were supposedly unfurling simultaneously.

The big question, then, is how it can be said to have been expanding at all. If there was nothing to expand into, how is this process of explosion comprehensible, as such? To imagine it requires a perspective where the camera is outside our universe, an idea that invalidates the notion that the big bang was the origin of our universe. And, even if our universe is embedded in a higher dimensional space, the emergence of our lower-dimensional realm still requires some explanation. I wonder if it will ever become an object of knowledge for us: both as a species with a certain amount of information about how the universe works – verified through repeated experiments and predictive power – and as a collection of individuals who almost never know more than a tiny fraction of what all people know as a collective.

The video is a bit over-hyped, as well as a transparent attempt to defend spending a great deal of money on pure research, but perhaps it will interest some people regardless. Some of the prospects associated with the LHC – such as looking for evidence of supersymmetry or investigating the nature of gravity – are very exciting indeed, from the perspective of advancing our basic understanding about the nature of matter, and the kinds of interaction that take place in our universe.

Fish paper publication upcoming

I may be delirious because it’s 6:30am, but this seems pretty unambiguous:

I really enjoyed the piece you wrote on EU policies regarding fishery sustainability off the coast of West Africa. I’d like to work with you to prepare your piece for publication in [the MIT International Review].

You mentioned on your cover letter that you would be willing to “re-focus it in the most appropriate direction and summarize other sections.” This will probably comprise the bulk of our work together, as your piece was very well written to begin with.

An excellent bit of news by which to start the day. I am off to London.

Thesis development

Talking with Dr. Hurrell about the thesis this evening was rather illuminating. By grappling with the longer set of comments made on my research design essay, we were able to isolate a number of interwoven questions, within the territory staked out for the project. All relate to science and global environmental policy-making, but they approach the topic from different directions and would involve different specific approaches and styles and standards of proof.

Thesis idea chart

The first set deals with the role of ‘science’ as a collection of practices and ideals. If you imagine society as a big oval, science is a little circle embedded inside it. Society as a whole has a certain understanding of science (A). That might include aspects like objectivity, or engaging in certain kinds of behaviour. These understandings establish some of what science and scientists are able to do. Within the discipline itself, there is discussion about the nature of science (B), what makes particular scientific work good or bad, etc. This establishes the bounds of science, as seen from the inside, and establishes standards of practice and rules of inclusion and exclusion. Then, there is the understanding of society by scientists (C). That understanding exists at the same time as awareness about the nature of the material world, but also includes an understanding of politics, economics, and power in general. The outward-looking scientific perspective involves questions like if and how scientists should engage in advocacy, what kind of information they choose to present to society, and so on.

The next set of relationships exist between scientists and policy-makers. From the perspective of policy-makers, scientists can:

  1. Raise new issues
  2. Provide information on little-known issues
  3. Develop comprehensive understandings about things in the world
  4. Evaluate the impact policies will have
  5. Provide support for particular decisions
  6. Act in a way that challenges decisions

For a policy-maker, a scientist can be empowering in a number of ways. They can provide paths into and through tricky stretches of expert knowledge. They can offer predictions with various degrees of certainty, ranging from (say) “if you put this block of sodium in your pool, you will get a dramatic explosion” to “if we cut down X hectares of rainforest, Y amount of carbon dioxide will be introduced into the atmosphere.”

The big question, then, is which of these dynamics to study. Again and again, I find the matter of how scientists understand their legitimate policy role to be among the most interesting. This becomes especially true in areas of high uncertainty. The link from “I know what will happen if that buffoon jumps into the pool strapped to that block of sodium” to trying to stop the action is more clear than the one between understanding the atmospheric effects of deforestation and lobbying to curb the latter. Using Stockholm as a ‘strong case’ and Kyoto as a ‘weak case’ of science leading to policy, the general idea would be to examine how scientists engaged with both policy processes, how they saw their role, and what standards of legitimacy they held it to. This approach focuses very much on the scientists, but nonetheless has political saliency. Whether it could be a valid research project is a slightly different matter.

The first big question, then, is whether to go policy-maker centric or scientist centric. I suspect my work would be more distinctive if I took the latter route. I suspect part of the reason why the examiners didn’t like my RDE was because they expected it to take the former route, then were confronted with a bunch of seemingly irrelevant information pertaining to the latter.

I will have a better idea about all of this once I have read another half-dozen books: particularly Haas on epistemic communities. Above all, I can sense from the energy of my discussions with Dr. Hurrell that there are important questions lurking in this terrain, and that it will be possible to tackle a few of them in an interesting and original way.

The Salmon of Doubt

One more promising bit of academic news, from the MIT International Review:

Your paper is indeed still being considered (congratulations!), having made it through a particularly rigorous selection process. You will receive a more formal note to this effect in the forthcoming days.

This is, of course, the eternal fish paper, still passing through journal selection processes on its way to eternity. So much time has now passed since I wrote that paper that it feels like a familiar alien life-form that has been observing me continuously, but which I can only properly recognize when it glances at me in a certain way. Needless to say, this is an odd relationship to have with a piece of your own work.

I am very cautiously optimistic. If the paper gets through to publication, it will be my first published work in a journal not run by the University of British Columbia.

Research design essay blasted

I just got the feedback on my research design essay, and it is enormously less positive than I had hoped. The grade is a low pass and there are two written statements included: one that is fairly short and reasonably positive, the other longer and far more scathing. It opens with “[t]his research design is not well thought out.” Both comments discuss the Stockholm Convention and Kyoto Protocol as though they are the real focus of the thesis; by contrast, they were meant to be illustrative cases through which broader questions about science and policy could be approached.

The shorter comment (both are anonymous) says that “the general idea behind the research is an interesting one” while the longer comment calls the cases “well-selected… [with] fruitful looking similarities and differences.” The big criticisms made in the longer comment are:

  1. The nuclear disarmament and Lomborg cases are unnecessary and irrelevant.
  2. I haven’t selected which key bits of the Kyoto negotiations to look at.
  3. My philosophy of science bibliography is not yet developed.
  4. Not enough sources on Kyoto or Stockholm are listed. Too many are scientific reports.

It blasts me for not yet having a sufficiently comprehensive bibliography, and for the irrelevance the commenter sees in the nuclear weapons and Lomborg examples. The whole point of those is to address the question of what roles scientists can legitimately take, and how the policy and scientific communities see the role of science within global environmental policy making. The point is definitely not, as the comment seems to assume, to compare those cases with Stockholm and Kyoto. Taken all in all, this is hands-down the most critical response to anything important I have written for quite a number of years.

To me, it seems like the major criticism is that the thesis has not been written yet. I mention being interested in the philosophy of science, insofar as it applies, but have not yet surveyed the literature to the extent that seems expected. The same goes for having not yet selected the three “instances or junctures” in the Kyoto negotiations that I am to focus on.

As is often the case when I see something I was quite confident about properly blasted, I am feeling rather anxious about the whole affair – to the point, even, of feeling physically ill. I always knew there was a lot more work to be done – a big part of why I have decided to stay in Oxford over the summer – but I expected that the general concepts behind the thesis plan were clear enough. The long comment definitely indicates that not to be the case. I can take some solace in what Dr. Hurrell has said. He has more experience with environmental issues than probably anyone else in the department and has also had the most exposure to the plotting out of my particular project. Of it, he has said: “[the] Research Design Essay represent[s] an excellent start in developing the project and narrowing down a viable set of questions to be addressed.” Still, I would be much happier if the examiners had said likewise.

The major lesson from all this is to buckle down, do the research, and prove them wrong for doubting the potential and coherence of this project. The issue is an important one, even if it is more theoretical and amorphous than many of the theses they will receive. A simple comparison of Kyoto and Stockholm would be enormously less interesting.

Potentially misleading statistics

How frequently do you see in the headlines that scientists have discovered that tomato juice reduces the chances of Parkinson’s disease, that red wine does or does not reduce the risk of heart disease, or that salmon is good for your brain? While statements like these may well be true, they tend to come together as a random collection of disconnected datasets assessed using standard statistical tools.

Of course, therein lies at least one major rub inherent to this piecemeal approach. If I come up with twenty newsworthy illnesses and then devise one clinical trial to assess the effectiveness of some substance for fighting each of them, I am quite likely to come up with at least one statistically significant result. This is in fact true even if the substance I am providing does absolutely nothing. While the placebo effect could account for some of this, the more important reason is much more basic:

Statistical evaluation in clinical trials is done using a method called hypothesis testing. Let’s say I want to evaluate the effect of pomegranate juice on memory. I come up with two groups of volunteers and some kind of memory test, then give the juice to half the volunteers and an indistinguishable placebo to the others. Then, I give out the tests and collect scores. Now, it is possible that – entirely by chance – one group will outperform the other, even if they are both randomly selected and all the trials are done double-blind. As such, what statisticians do is start with the hypothesis that pomegranate juice does nothing: this is called the null hypothesis. Then, you look at the data and ask how likely data like yours would be to arise if the null hypothesis were true. The less plausible the data are under the null hypothesis, the more confident you can be in rejecting it.
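To make the procedure concrete, here is a minimal simulation of such a trial in Python, using a permutation test as the evaluation method (all group sizes, score distributions, and variable names are invented for illustration):

```python
import random
import statistics

random.seed(42)

# Hypothetical memory-test scores: a juice group and a placebo group.
# The means and spreads here are made up purely for illustration.
juice = [random.gauss(72, 10) for _ in range(50)]
placebo = [random.gauss(70, 10) for _ in range(50)]

observed_gap = statistics.mean(juice) - statistics.mean(placebo)

# Permutation test: if the null hypothesis is true (juice does nothing),
# the group labels are arbitrary, so randomly reshuffling them should
# produce gaps as large as the observed one fairly often.
pooled = juice + placebo
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:50]) - statistics.mean(pooled[50:])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / trials
print(f"observed gap: {observed_gap:.2f}, p-value: {p_value:.3f}")
```

If the printed p-value falls below the chosen threshold (conventionally 0.05), the null hypothesis is rejected; otherwise the data are deemed consistent with pomegranate juice doing nothing.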

If, for instance, we gave this test to two million people, all randomly selected, and the ones who got the pomegranate juice did twice as well in almost every case, it would seem very unlikely that pomegranate juice has no effect. The question, then, is where to set the boundary between data that is consistent with the null hypothesis and data that allows us to reject it. For largely arbitrary reasons, it is usually set at 95%. That means we reject the null hypothesis only when data like ours would arise 5% of the time or less if the null hypothesis were true – that is, if pomegranate juice really did nothing.

More simply, let’s imagine that we are rolling a die and trying to evaluate whether it is fair or not. If we roll it twice and get two sixes, we might be a little bit suspicious. If we roll it one hundred times and get all sixes, we will become increasingly convinced the die is rigged. It’s always possible that we keep getting sixes by random chance, but the probability falls with each additional piece of data we collect that indicates otherwise. The number of trials we do before we decide that the die is rigged is the basis for our confidence level.[1]
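The die example can be checked exactly with the binomial distribution; a small sketch (the function name is my own):

```python
from math import comb

def p_at_least_k_sixes(rolls: int, k: int, p_six: float = 1 / 6) -> float:
    """Probability of getting k or more sixes in `rolls` rolls of a fair die."""
    return sum(
        comb(rolls, i) * p_six**i * (1 - p_six) ** (rolls - i)
        for i in range(k, rolls + 1)
    )

# Two sixes in two rolls: suspicious, but happens by chance ~2.8% of the time.
print(p_at_least_k_sixes(2, 2))   # ≈ 0.0278
# Ten sixes in ten rolls: essentially impossible for a fair die.
print(p_at_least_k_sixes(10, 10))
```

The first result already sits below the 5% threshold, which shows how quickly repeated extreme outcomes become implausible under the hypothesis of a fair die.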

The upshot of this, going back to my twenty diseases, is that if you do these kinds of studies over and over again, you will incorrectly identify a statistically significant effect about 5% of the time. Because that’s the confidence level you have chosen, you will get that many false positives on average (instances where you identify an effect that doesn’t actually exist). You could set the confidence level higher, but that requires larger and more expensive studies: moving from 95% confidence to 99% or higher can demand a substantially larger sample. That is cheap enough when you’re rolling dice, but it gets extremely costly when you have hundreds of people being experimented upon.
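This multiple-testing effect is easy to simulate. Under the null hypothesis a p-value is uniformly distributed between 0 and 1, so twenty do-nothing studies can be modelled as twenty uniform draws (all constants here are invented for illustration):

```python
import random

random.seed(0)

ALPHA = 0.05          # the usual 95% confidence threshold
N_STUDIES = 20        # twenty illnesses, twenty independent trials
N_SIMULATIONS = 2_000

# Each simulated "study" tests a substance that truly does nothing, so any
# significant result is a false positive. Under the null hypothesis the
# p-value of each study is a uniform random draw from [0, 1].
runs_with_false_positive = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_STUDIES)]
    if any(p < ALPHA for p in p_values):
        runs_with_false_positive += 1

# Analytically, P(at least one "finding") = 1 - 0.95**20 ≈ 0.64,
# so roughly two runs in three produce a spurious headline.
print(runs_with_false_positive / N_SIMULATIONS)
```

In other words, run twenty null trials at the 95% level and more often than not at least one of them will look newsworthy.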

My response to all of this is to demand the presence of some comprehensible causal mechanism. If we test twenty different kinds of crystals to see if adhering one to a person’s forehead helps their memory, we should expect – at a 95% confidence level – that about one in twenty will appear to work. That said, we don’t have any reasonable scientific explanation of why this should be so. If we have a statistically established correlation but no causal understanding, we should be cautious indeed. Of course, it’s difficult to learn these kinds of things from the sort of news story I was describing at the outset.


[1] If you’re interested in the mathematics behind all of this, just take a look at the first couple of chapters of any undergraduate statistics book. As soon as I broke out any math here, I’d be liable to scare off the kind of people who I am trying to teach this to – people absolutely clever enough to understand these concepts, but who feel intimidated by them.

Lomborg on fish

I just re-read the short section on world fisheries in Bjorn Lomborg’s Skeptical Environmentalist, and noted that the level of analysis shown there is low enough to cast doubt on the rest of the book. He basically argues that:

  1. The global fish catch is increasing.
  2. We can always farm our way out of trouble.
  3. Fish aren’t that important anyhow (only 1% of human calories, 6% of protein).

He is seriously wrong on all three counts. On the matter of overall catch, that is a misleading figure, because it doesn’t take into account the effort involved in catching the fish. You could be catching more because you’re building more ships, using more fuel, etc. As long as subsidy structures like those in the EU and Japan remain, such escalation is inevitable. While increased effort and better technology can conceal the depletion of fish stocks, the reality remains. If we’re fishing above the rate at which a fishery can replenish itself, it doesn’t matter whether our catches are increasing or not. Or rather, it does insofar as it helps to determine how long it will be before the fishery collapses, as the cod fisheries of Newfoundland and the North Sea already have. Fisheries are also complex things. Catching X fish and waiting Y time doesn’t necessarily mean that you will have X fish to catch again. Much has to do with the structure of food webs, and thus energy flows within the ecosystem.

The idea that farming can be the answer is also seriously misleading. First and foremost, farmed fish are almost exclusively carnivorous. That means they need to be fed uglier, less tasty fish in order to grow. Since they aren’t 100% efficient at turning food into flesh, there is an automatic loss there. More importantly, if we begin fishing other stocks into decline in order to farm fish, we will just have spread the problem around, not created any kind of sustainable solution. As I have written about here before, serious pressure already exists on a number of species that are ground into meal for fish-farming. There are also the matters of how fish farms produce large amounts of waste that then leach out into the sea: biological wastes from the fish, leftover hormones and antibiotics from the flood of both used to make the fish grow faster and get sick less often in such tight proximity, and the occasional seriously diseased or genetically damaged fish escaping to join the gene pool.

I can only assume that Lomborg is right to say that “fish constitutes a vanishingly small part of our total calorie consumption – less than 1 percent – and only 6 percent of our protein intake.” Even so, that doesn’t mean that losing fisheries as a viable source of calories and protein would not be a terrible event. Humanity overall may not be terribly dependent, but certain groups of individuals are critically dependent. Moreover, the “it’s not all that important a resource anyway, so who cares if it goes?” attitude that is implied in Lomborg’s assessment fails to consider the ramifications that continuing to fish as we are could have for marine ecosystems in general and the future welfare of humanity.

One last item to identify is the fallacious nature of the 100 million tons a year of fish we can “harvest for free.” This is his estimate of the sustainable catch, and he then notes that we are only catching 90 million tons. He goes on to say that “we would love to get our hands on that extra 10 million tons.” First off, the distribution here matters. If the sustainable catch for salmon is five million tons and we are catching twenty, the overall figure doesn’t reflect the fact that salmon stocks will be rapidly destroyed. If we’re burning our way through, species by species (look at the wide variety of fish now served as ‘cod’ in the UK), then even a total catch below the aggregated potential sustainable yield could be doing irreparable harm. Secondly, we have shown no capacity for restraint as a species. Just looking at what Canada has done within its own territorial waters demonstrates that even rich governments with good scientists can make ruinous policy choices for political or other kinds of reasons.

All in all, Lomborg’s analysis is seriously misleading and shows little comprehension of the dynamics that underlie marine ecology and humanity’s interaction with it. While my research project for the thesis partly involves examining the controversy surrounding Lomborg, I am not planning to critique his statements directly in the thesis. With passages like this included, I may be tempted.

The science of complex systems

While walking with Bilyana this morning, we took to discussing complex dynamic systems, and the capability of present-day science to address them. Such systems are distinguished by the existence of complex interactions and interdependencies within them. You can’t look at the behaviour of a few neurons and understand the functioning of a brain; likewise, you can’t look at a few ocean currents or a few cubic miles of atmosphere and understand the climatic system. The resistance of these systems to being understood through being broken down and studied piece by piece is why they pose such a challenge to a scientific method that is generally based on doing exactly that.

Murray Gell-Mann, the physicist who proposed the existence of quarks while working at Caltech, extensively discusses complex dynamic systems in his excellent book: The Quark and the Jaguar. Among the most interesting aspects of that book is the discussion of the difficulty of categorizing things as simple or complex. That is to say, establishing the conditions of complexity. Some kinds of problems – taking the sixth root of some large number, for instance – are extremely complex for human beings but facile for computers. That said, computers have a terrible time trying to perform some tasks that people perform without difficulty. The comparison of human and machine capability is appropriate because of the difficulties involved in trying to understand something like the climatic system and determine the effects that anthropogenic climate change will have upon it. Increasingly, our approach to studying such things is based on computer modelling.

Whether studying an economy, the cognitive processes of a cricket, or the dynamics of a thunderstorm, modelling is an essential tool for understanding complex systems. At the same time, a level of abstraction is introduced that complicates the status of such understanding. First of all, it is likely to be highly probabilistic: we can work out about how many bolts of lightning a storm with certain characteristics might produce, but cannot predict with exactitude the behaviour of a certain storm. Secondly, we might not understand the reasons for which behaviour we predict is taking place. Some modern aircraft use neural networks and evolutionary algorithms to dampen turbulence along their wings, through the use of arrays of actuators. Because the behaviour is learned rather than programmed, it doesn’t reflect understanding of the fluid dynamics involved in the classical sense of the word ‘understanding.’

I predict that the most significant scientific advancements in the next hundred years or so will relate to complex dynamic systems. They exist in such important places – like all the chemical reactions surrounding DNA and protein synthesis – and they are so imperfectly understood at present. It will be interesting to watch.

Theorems and conjectures

As strongly evidenced by how I finished it in a few sessions within a single 24-hour period, Simon Singh’s Fermat’s Last Theorem is an exciting book. When you are kept up for a good part of the night, reading a book about mathematics, you can generally tell that some very good writing has taken place. Alongside quick biographies of some of history’s greatest mathematicians – very odd characters, almost to a one – it includes a great deal of the kind of interesting historical and mathematical information that one might relate to an interested friend during a long walk.

x^n + y^n = z^n

The idea that the above equation has no whole number solutions (i.e. 1, 2, 3, 4, …) for x, y, and z when n is greater than two is what Fermat’s Last Theorem asserts. Of course, since Fermat didn’t actually include his reasoning in the brief marginal comment that made the ‘theorem’ famous, it could only be considered a conjecture until it was proven, across the span of roughly 100 pages, by the British mathematician Andrew Wiles in 1995.
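Though no finite search could ever prove the theorem, the claim is easy to poke at numerically; a brute-force sketch in Python (the function name is my own):

```python
def fermat_solutions(n: int, limit: int):
    """Whole-number triples (x, y, z) with x**n + y**n == z**n, all at most `limit`."""
    return [
        (x, y, z)
        for x in range(1, limit + 1)
        for y in range(x, limit + 1)  # y >= x avoids duplicate orderings
        for z in range(y, limit + 1)
        if x**n + y**n == z**n
    ]

# For n = 2, solutions abound: the Pythagorean triples (3, 4, 5), (5, 12, 13), ...
print(fermat_solutions(2, 20))
# For n = 3, the search comes up empty, exactly as Fermat claimed.
print(fermat_solutions(3, 50))
```

Checks like this were run to enormous bounds over the centuries, always finding nothing for n > 2, but only Wiles’ proof settled the question for all whole numbers.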

While the above conjecture may not seem incredibly interesting or important on its own, it ties into whole branches of mathematics in ways that Singh describes in terms that even those lacking mathematical experience can appreciate. Even the more technical appendices should be accessible to anyone who has completed high school mathematics, not including calculus or any advanced statistics. A crucial point quite unknown to me before is that a proof of the Taniyama-Shimura conjecture (now also called a theorem) automatically yields a proof of Fermat’s Last Theorem. Since mathematicians had been assuming Taniyama-Shimura to be true for decades, Wiles’ proof of both was a really important contribution to the further development of number theory and mathematics in general.

Despite Singh’s ability to convey the importance of math, one overriding lesson of the book is not to become a mathematician: if you manage to live beyond the age of thirty, which seems to be surprisingly rare among the great ones, you will probably do no important work beyond that point. Mathematics, it seems, is a discipline where experience counts for less than the kind of energy and insight that are the territory of the young.

A better idea, for the mathematically interested, might be to read this book.

On caffeine

Caffeine molecule

Caffeine – a molecule I first discovered as an important and psychoactive component of Coca-Cola – is a drug with which I’ve had a great deal of experience over the last twelve years or so. By 7th grade, the last year of elementary school, I had already started to enjoy mochas and chocolate covered coffee beans. When I was in 12th grade, the last year of high school, I began consuming large amounts of Earl Grey tea, in aid of paper writing and exam prep. During my first year at UBC, I started drinking coffee. At first, it was a matter of alternating between coffee itself and something sweet and delicious, like Ponderosa Cake. By my fourth year, I was drinking more than 1L a day of black coffee: passing from French press to mug to bloodstream in accompaniment to the reading of The Economist.

Unfortunately, coffee doesn’t seem to work quite right in Oxford. My theory is that it’s a function of the dissolved mineral content in the water, which is dramatically higher than that in Vancouver.

As I understand it, caffeine has a relatively straightforward method of operation. After entering the body through the stomach and small intestine, it enters the bloodstream and then binds to adenosine receptors on the surface of cells without activating them. This eventually induces higher levels of epinephrine release, and hence physiological effects such as increased alertness. Much more extensive information is on Wikipedia.

From delicious chocolate covered coffee beans used to aid wakefulness during the LIFEboat flotillas to dozens of iced cappuccinos at Tim Horton’s with Fernando while planning the NASCA trip, I’ve probably consumed nearly one kilogram of pure caffeine during the last decade or so. After the two remaining weeks of this term – and thus this academic year – have come to a close, my tight embrace with the molecule will probably loosen a bit.