IR theory and human nature

Magdalen College, Oxford

One thing I have always disliked about international relations theory is its tendency to treat human nature as something simple and unchangeable. Often, I think this is more the result of short descriptions of theories hardening into caricatures than of theories that genuinely fail to appreciate how human behaviour is (a) malleable within broad limits and (b) critically influenced by context. A great deal of fascinating recent psychology demonstrates the latter point. Malcolm Gladwell’s work is an entertaining and accessible example, as is the work on behavioural economics that has been attracting so much attention.

I have a chart in my theory notes listing the major alternatives: Realism, Liberalism, Neoliberalism, Marxism, Feminism, and Critical Theory. In the column for ‘human nature’ the positions given are: ‘Fixed (essentially selfish)’, ‘Fixed (essentially selfish)’, ‘Fixed (essentially selfish)’, ‘Historically determined (corrupted but changeable)’, ‘Varies according to sub-model’, and ‘No fixed nature.’ Firstly, whether people generally behave selfishly or not seems insufficient grounds for asserting the existence of an essential human nature. Secondly, virtually all IR theories could pretty easily stretch to accommodate how people’s thinking and actions are conditioned by the environment in which they live. Indeed, this seems to be one of the major reasons why neoliberals can continue to hope that conflictual elements of world politics will eventually give way to more cooperative ones. (Of course, we can also question whether the six traditions listed above constitute an appropriate taxonomy of IR theoretical approaches.)

The tendency towards caricature mentioned above is a broader feature of IR. Because the discipline seeks to cover so much, it is often simplified to a dangerous extent. Key points are pulled out of historical situations ranging from the Peloponnesian War to the Cuban Missile Crisis, while theorists are often understood on the basis of a few quotes and bullet points. In any case, I have never found international relations theory to be a terribly useful or worthwhile enterprise. Both political theory and history have a lot more to say about the major issues involved, and both seem to have a more defensible approach to dealing with them.

Aside: Richard Rorty, American philosopher and inventor of the concept of ‘ironic liberalism,’ died today.

PS. The sore throat and aggressive cough I picked up on the Walking Club trip are still very much with me. I hope they don’t distract those around me too much during the exams tomorrow.

‘Able Archer’ and leadership psychology

If you have any interest in nuclear weapons or security, and you have never heard of the 1983 NATO exercise called ‘Able Archer,’ you should read today’s featured Wikipedia article.

One fascinating thing it demonstrates is the amazing willingness of leaders to assume their enemies will see as benign actions that, had those same enemies taken them, would be seen as highly aggressive. Case in point: the issues America is raising about Iranian intervention in Iraq. If Iran were involved in a major war on America’s doorstep, you can bet that there would be American intervention. This is not to assert any kind of moral equivalence, but simply to state the appallingly obvious.

An afternoon game

This afternoon, from 12:30 to 1:30, I participated in an economic experiment that consisted of a game. Within the game, there were three groups of five players. Members of the first group, the As, were matched randomly with members of the second group, the Bs. Each of these players started with 35 tokens, each worth one fifth of a pound (20p). The third group, the Cs, got 25 tokens apiece.

The game was only played once (i.e. not iterated).

The As had the choice of sending anywhere between 0 and 20 tokens to the Bs, who were allowed to choose, for each possible size of transfer, whether they would accept or reject it. If the B accepted a transfer of X tokens, the A ended up with 50−X tokens and the B with 30+X. If the B rejected, the B kept the initial 35 tokens and the A lost one, ending with 34. (The sensible strategy for a B, from my perspective, being to set the acceptance threshold at the point where accepting is certain to leave you better off than rejecting.) For each A–B pair where a transfer took place, every C lost one token. Cs did not make any choices over the course of the game.

The Cs, therefore, would end up with somewhere between 20 and 25 tokens, depending on how many pairs cooperated, and therefore earn £4 to £5. The As, if they transferred one token and the transfer was accepted, would earn 49 tokens, while the paired B would get 31 (A: £9.80, B: £6.20). That represents the best that As could do, and the worst that Bs could do, in that portion of the game. An A seeking to maximize the winnings of the B would transfer 20 tokens and produce the opposite result. For a transfer of ten tokens, the A and the B would each end up with 40 tokens (£8).
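The payoff rules can be summarised in a few lines of code. This is just a sketch, using my own names for things, with amounts in tokens (each worth 20p):

```python
TOKEN_VALUE = 0.20  # pounds per token

def payoffs(x, accepted):
    """Tokens held by (A, B) after A offers a transfer of x tokens."""
    if accepted:
        return 50 - x, 30 + x   # acceptance creates a surplus to split
    return 34, 35               # A loses one token; B keeps the initial 35

# The worked examples from above:
assert payoffs(1, True) == (49, 31)    # A: £9.80, B: £6.20
assert payoffs(10, True) == (40, 40)   # both end with £8
assert payoffs(20, True) == (30, 50)   # the best a B can do
```

Note that for any accepted transfer the pair’s combined holdings rise from 70 to 80 tokens, which is where the temptation to cooperate at the Cs’ expense comes from.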

All players also had the chance to win tokens by guessing what the other players would do: how many of the As would transfer some amount, and how many of the Bs would accept. Getting one right earned you 50p and getting both right earned you £1. While this offered the chance to earn more money, it did not alter the central decision in the game, though your reasoning about that decision would naturally inform your guesses.

My thinking was that, firstly, every A would make a transfer, because the worst they could do was lose a single token (if the offer was rejected) while they stood to gain as many as fourteen. Additionally, each B would accept any transfer of six tokens or more, for precisely the same reason. Moreover, it would be awfully boring to sit in a room for an hour listening to rules and then not actually play the game in an active way.

I was an A, one of the two actively deciding groups. I decided to transfer 7 tokens, one above the minimum amount at which the payoff to the B from accepting exceeded what they would get from rejecting. For a B, accepting 7 tokens meant earning £7.40, while rejecting meant keeping £7. That said, for the B to accept cost all five Cs one token each, a total loss among the Cs of £1. For the A, transferring seven tokens meant getting £8.60 if the transfer was accepted and £6.80 if it was rejected (which would be against the interest of the B, provided they didn’t care about the Cs).

In the end, I won £7.30, which means that my offer was rejected but that I guessed properly that the four other As would all make an offer. In addition to the £7.30, I got £3 just for playing.

The outcome of my section of the game, therefore, left me with £6.80 and the B with £7, and did not reduce the number of tokens held by the group of Cs. Had my B accepted, they would have walked away with another 40p and I would have earned another £1.80. Our collective gain of £2.20 would have been more than twice the collective loss of the Cs. I suppose either concern for the Cs, or the fact that I stood to earn more from the transaction than they did, led them to reject my strategy of making a near-minimum offer for clear mutual gain.
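For what it’s worth, here is a quick check of that counterfactual arithmetic, working in whole tokens (each worth 20p) and using my own variable names:

```python
TOKEN_VALUE = 0.20  # pounds per token

# My pair: I offered 7 tokens and the offer was rejected.
a_rejected, b_rejected = 34, 35          # actual outcome: £6.80 and £7.00
a_accepted, b_accepted = 50 - 7, 30 + 7  # counterfactual: 43 and 37 tokens

a_gain = (a_accepted - a_rejected) * TOKEN_VALUE  # 9 tokens foregone by me
b_gain = (b_accepted - b_rejected) * TOKEN_VALUE  # 2 tokens foregone by the B
c_loss = 5 * TOKEN_VALUE                          # one token from each of five Cs
```

On those numbers, acceptance would have gained me £1.80 and the B 40p, a combined £2.20 against the Cs’ combined loss of £1.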