Experimenting on model brains

Milan and Paul in a diner

While taking the bus back from Toronto last night, I found myself wondering again about the brain-in-a-computer issue. Though there are legitimate doubts about whether it will ever actually be possible to build a model akin to a human brain inside a machine, people are already building successively better (but still very poor) approximations. Eventually, the models may become good enough for the following ethical question to arise.

What I wondered about, in particular, was the ethics of experimenting on such a thing. I have heard people mention, from time to time, the possibility of a ‘grandmother neuron’ charged specifically with recognizing your grandmother. The idea seems very unlikely, given that neurons die with regularity and people rarely completely and exclusively forget how to recognize their grandmothers. That being said, there is plenty of experimental evidence that brain injuries can produce interesting results. As a consequence, the unfortunate, brain-damaged victims of car crashes sometimes find themselves the focus of intense interest among cognitive and behavioural psychologists.

If we did have a model brain (say, a semi-realistic model of a fly or beetle brain), we could experiment by disabling sections of it and observing the effects. By extension, the same could be done with rat, monkey, or human brains. The question then becomes: is there an ethical difference between experimenting on a mathematical model that behaves like a human brain and experimenting on a real human brain? Does that distinction lie in the degree to which the model is comprehensive and accurate? For instance, a good model brain might respond with terror and confusion if experimented upon.
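
Incidentally, crude versions of this kind of lesioning experiment can already be run on artificial neural networks. Here is a minimal sketch in Python (the toy network, its weights, and all the names are invented for illustration; nothing here resembles a real fly or beetle brain): zero out a few ‘neurons’ and measure how much the behaviour changes.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy "model brain": one hidden layer of 16 units with random weights.
    # (Purely illustrative; the weights are random, not trained.)
    W1 = rng.normal(size=(16, 8))   # stimulus -> hidden units
    W2 = rng.normal(size=(4, 16))   # hidden units -> response

    def respond(stimulus, lesion_mask=None):
        # Compute the network's response; lesion_mask is a boolean array
        # marking hidden units to disable (our stand-in for brain damage).
        hidden = np.tanh(W1 @ stimulus)
        if lesion_mask is not None:
            hidden[lesion_mask] = 0.0   # disabled units fall silent
        return np.tanh(W2 @ hidden)

    stimulus = rng.normal(size=8)       # an arbitrary input pattern
    healthy = respond(stimulus)

    # 'Lesion' the first four hidden units and measure the behavioural change.
    mask = np.zeros(16, dtype=bool)
    mask[:4] = True
    lesioned = respond(stimulus, mask)

    print("Change in response:", np.linalg.norm(healthy - lesioned))

Of course, a 30-line toy raises none of the ethical worries above; the question is what happens when the model is good enough for the analogous experiment to matter.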

This is yet another way of getting at the whole ethical question of whether people are simply their material selves, or whether there is something metaphysical to them. I maintain extremely strong doubts about the latter possibility, but still feel that there is an ethical distinction between experimenting on crude or partial brain models and experimenting on complete ones or real brains. I am much less sure about whether there is a meaningful ethical distinction between the last two options.

Author: Milan

In the spring of 2005, I graduated from the University of British Columbia with a degree in International Relations and a general focus in the area of environmental politics. In the fall of 2005, I began reading for an M.Phil in IR at Wadham College, Oxford. Outside school, I am very interested in photography, writing, and the outdoors. I am writing this blog to keep in touch with friends and family around the world, provide a more personal view of graduate student life in Oxford, and pass on some lessons I've learned here.

7 thoughts on “Experimenting on model brains”

  1. Does that distinction lie in the degree to which the model is comprehensive and accurate? For instance, a good model brain might respond with terror and confusion if experimented upon.

    I would say so. A model brain that responds like a human brain has as much claim to being treated ethically as any ‘real’ human.

  2. I wanted to say “A forgery is no less a forgery for its being a perfect imitation of the original”, but we can synthesize life now so it seems a little behind the times.

    What I want to say instead is this: why would we be so presumptuous as to assume that AI will develop by imitating a human brain? Human brains have the properties they do because of many generations of evolution on very particular hardware – the neural network of our brain. Computers run on chips which have very different properties – mostly because they alter themselves in different ways. Perhaps we could model those modifications at the software level, but it’s unclear then that we’ll ever get past the modelling of thought to its enactment.

    It seems to me that if we reach AI it will be because we find the kind of intelligence which is appropriate and proper to the kinds of machines we are trying to make intelligent. What “ethics” will be relevant is unknowable, because ethics have to do with humans, and since we’re not making a human, our ethics don’t apply to it as a human but as a non-human.

    Back to forgery – this does relate. A forgery of a human brain is just a copy; a human brain is not just an image of a human brain but a live human brain. It is not a representation. Representation is a way we talk about mental states; we have no special reason for thinking it’s how the brain really operates (it’s quite a modern way of describing how it feels to be a human, and it’s unclear why it would “feel” the same way to be a thinking computer).

    This brings us to: what does it mean to think, as opposed to merely follow orders? If someone follows orders blindly, we say he does not “think for himself” – and we find him culpable not because he acted wrongly but because he failed to act rightly; he failed to be the source of his own action. The human is not always free, but it can always make itself free by making itself the site of decision (i.e. it can always be you who is doing what “you are doing”, but this is no guarantee that you are actually doing your acts initially and for the most part).

    If it’s the same for AI, then what it means for a computer system to think is for it to carry out actions not according to orders given to it from the outside but according to laws or principles or reasons that it gives itself, or that it adopts as its own, that it makes its own. What principles would be appropriate to a computer? This is probably unknowable by us, but it would be that appropriateness which would be the origin of any AI ethics.

  3. Some of those looking at making machines react (as opposed to think) more ‘lifelike’ have gone for hybridisation. A conversation with some people a few months back about cars which drive themselves led on to researchers incorporating rat brain cells into controllers for independently mobile machines to give intelligent touch feedback systems – hybrots (first developed around 1993). Googling shows this has been active since before 2002 (at least one 1998 article on rat-robots). There was more research on semi-living entities, but a lot has been ratcheted down because of animal experimentation concerns. Sheffield Uni is now working on producing animal-type touch feedback systems without the animal cells. Rumour has it they had been wanting to look at hybrots but the relevant teams lost their funding due to nerves over the animal experimentation aspect (I don’t think anyone was talking about cruelty to brain cells there, but the cells came from somewhere).

  4. btw ‘ethical question of whether people are simply their material selves, or whether there is something metaphysical to them.’

    Philosophical rather than ethical question surely?

  5. Antonia,

    “Philosophical rather than ethical question surely?”

    Ethics is a subdivision of philosophy, but it is not only that. Philosophy is not some obscure practice done by men in tweed jackets, any more than chemistry is an obscure science done by men in white coats – the purpose of philosophy is to make explicit the implicit horizon of understanding already present in our everyday beliefs. In this case, the purpose is to determine whether there is something about the human which supervenes on its material parts: whether its material description is incomplete, whether there is something we cannot deny about the human which cannot be better described in terms of matter and energy.

  6. The question surely comes down to whether or not the model was conscious/had experience. Since we can’t answer this, I suppose it will come down to whether or not we believe the model to be conscious.

    Consciousness is surely the most difficult question facing both philosophy and science. I find it totally mysterious.
