As a lecturer, the University of Toronto’s Jordan Peterson is quite something. Yesterday, Tristan showed me videos of a couple of his lectures. One of them – The Necessity of Virtue – is available online.
One thing I found striking about the talks (which are mostly about psychology and ethics) is just how much we know about the brain, and how much we can reduce seemingly complex human behaviours and experiences to the predictable operation of certain brain structures. I had not previously realized the full importance of the hypothalamus. In one particularly grim example, Peterson explains that a cat stripped of almost all of its brain, but left with its spinal cord and hypothalamus, will still behave much like an ordinary cat, except that it will be unusually prone to exploring and unable to mate (if male).
What humanity is learning about the brain (which seems to produce the mind) seems likely to matter a great deal, both for understanding the world and for deciding how to act in it. I will be adding Peterson’s Maps of Meaning: The Architecture of Belief to my reading list, and may even be able to finagle a way to audit one of his courses if I do move to Toronto.
I look forward to discussing “Maps of Meaning”.
It’s an expensive book: $57.51 for the paperback on Amazon, and $111.82 for the hardcover.
A newfound chemical drives male squid berserk, and the molecule appears similar to ones seen in humans, scientists now say.
Future research might investigate whether comparable human semen proteins have similar effects, investigators added.
Scientists investigated the longfin squid (Loligo pealeii), which live for nine to 12 months, usually mating and laying eggs in the spring, when the animals migrate from deep offshore waters to shallower waters along the Eastern Seaboard from North Carolina to Maine. Females mate several times with multiple males, who compete fiercely over females.
@Milan,
I already forwarded you a PDF copy of MOM. If you want, I can send it again. Or, I can send it to anyone else interested.
Where did you forward it to? My Gmail?
Yes. If you think of people, we have very little hand programming. We have a brain, we get inputs and after a while we figure it out. By five you have pretty much made sense of the world in terms of understanding language and what objects are and stuff like that.
So you are trying to get computers to learn the way a baby learns?
Basically, we want to understand how the cerebral cortex of the brain learns. The most interesting thing about the cortex is it all looks pretty much the same. So, what we have in there is a general-purpose learning algorithm, something that will take input from the senses and make sense of it. We use the same algorithm for both vision and speech.
How do you know this?
Researchers have rewired ferrets’ brains so that the visual input is sent to the bit of the cortex that would normally deal with sound. And this bit learns to do vision. It has been prewired to deal with sensory input, but not necessarily sound. So the reason different regions of the brain do different things is mainly because of what they are connected to.
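A minimal sketch of the “general-purpose learning algorithm” idea the interview describes: the same model class, with identical hyperparameters, fitted to two different “modalities”. The datasets and architecture here are my own illustrative assumptions (scikit-learn’s digits set standing in for vision, and synthetic features standing in for audio), not anything from the interview itself.

```python
# Illustrative sketch: one learning algorithm, two input modalities.
# The model choice and datasets are assumptions made for this demo.
from sklearn.datasets import load_digits, make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def fit_and_score(X, y, label):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    # Identical architecture and hyperparameters for both modalities.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print(f"{label}: test accuracy = {model.score(X_test, y_test):.2f}")

# "Vision": 8x8 handwritten digit images, flattened to 64 features.
digits = load_digits()
fit_and_score(digits.data, digits.target, "vision (digits)")

# "Audio": a synthetic stand-in for, say, spectrogram features.
X_audio, y_audio = make_classification(n_samples=1000, n_features=64,
                                       n_informative=20, n_classes=10,
                                       n_clusters_per_class=1, random_state=0)
fit_and_score(X_audio, y_audio, "audio (synthetic)")
```

The point of the sketch is only that nothing in the learner is specific to either input, which is roughly the claim being made about the cortex; it takes no position on whether the brain actually works this way.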
“Researchers have rewired ferrets’ brains so that the visual input is sent to the bit of the cortex that would normally deal with sound. And this bit learns to do vision. It has been prewired to deal with sensory input, but not necessarily sound. So the reason different regions of the brain do different things is mainly because of what they are connected to.”
This is what we call in philosophy a “hasty generalization”. They are common in science – science is great at uncovering phenomena, but its descriptions are often hurried and unreflective. I’m not saying this contains false claims; I’m saying it’s not a coherent or persuasive argument.
To be fair, that is a media quote. If you want a detailed and thoughtful analysis, go back to the original published research.
What kind of research could demonstrate that the grey matter is “running an algorithm”? It’s a hypothesis, which can’t be proved in advance – the usefulness of the hypothesis will be confirmed or falsified retrospectively, based on the fruitfulness of future research. However, I’d suggest that it is unlikely that the brain is “running an algorithm”, purely on the basis that brains (and much lower-level biological phenomena) persist and evolve on the basis of their ability to deal with the unknown without having to check the situation against a bank of previously accumulated data. Such checking is slow – and biological computers are notoriously inept at rule-based calculation. Biological organisms succeed or fail in the world on the basis of their ability not to work like computers. Every attempt to use a computer to replicate the complex integration biological organisms perform in real situations makes this error, and I think this is a major reason AI is still a joke.