
"That which I cannot build, I do not truly understand" -- Richard Feynman

In 2006, IBM Research hosted a series of lectures on Cognitive Computing, featuring presentations from some well-known researchers in neuroscience and cognitive computing. Videos of the lectures and the accompanying presentations are available online. A word of caution, however: as one audience member commented in a Q&A session after a panel presentation, a number of the presentations were more 'neuromythology' (i.e. bravado, marketing, speculation and wishful thinking) than neuroscience. I did learn a number of things from a few of the presentations, however, and will try to summarize the good stuff and ignore the rest in the next few posts.

The presentation by Henry Markram, EPFL/BlueBrain: The Emergence of Intelligence in the Neocortical Microcircuit (video) describes the Blue Brain project that Markram was director of at the time, which aimed to create a computer model of the neurons in a cortical column using a supercomputer to model each neuron and networking over 8000 of these supercomputer nodes together using MPI (Message Passing Interface - an industry standard messaging protocol for parallel computing). "Phase 1" of this work was completed in 2007.
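Blue Brain's actual simulation stack isn't shown in the talk, but the basic parallelisation idea - partition the neuron population across nodes, advance each slice locally, then exchange spike messages - can be sketched. The sketch below is a toy under stated assumptions: Python's `multiprocessing` stands in for MPI, and the per-neuron "model" is just a placeholder spike probability, not a real neuron model.

```python
# Toy sketch of the Blue Brain parallelisation idea: each worker "node"
# owns a slice of the neuron population, and the results are gathered
# after every step. The real project used MPI across 8000+ Blue Gene
# nodes; multiprocessing here merely stands in for the message passing.
from multiprocessing import Pool
import random

NEURONS = 1000
WORKERS = 4

def simulate_slice(args):
    """Advance one slice of the population one timestep.

    Returns the ids of neurons in the slice that spiked. The 'model'
    is a placeholder: each neuron spikes with 5% probability.
    """
    start, stop, seed = args
    rng = random.Random(seed)
    return [i for i in range(start, stop) if rng.random() < 0.05]

def step(pool):
    """One global timestep: scatter slices to workers, gather spikes."""
    bounds = [(w * NEURONS // WORKERS, (w + 1) * NEURONS // WORKERS, w)
              for w in range(WORKERS)]
    spikes = []
    for slice_spikes in pool.map(simulate_slice, bounds):
        spikes.extend(slice_spikes)   # gather phase (MPI_Allgather-like)
    return spikes

if __name__ == "__main__":
    with Pool(WORKERS) as pool:
        spikes = step(pool)
    print(f"{len(spikes)} of {NEURONS} neurons spiked this step")
```

The interesting engineering problem, which this toy ignores entirely, is that real spikes must be routed only to the nodes holding the target neurons' synapses, and the communication cost of that routing dominates at scale.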

Markram and his team's work was a technological tour de force, tackling some incredibly daunting challenges head on (ahem). For this post, I'd like to narrow the focus to some of the things I learned about spiking neuron models from Markram's presentation, and link these concepts to topics covered in previous posts. The images below are from Markram's presentation.

Re: Purkinjes and Granules and Schwanns, oh my...
Each neuron is unique, but when you look at a large number of them (as you need to do when you contemplate trying to model a 10,000 neuron cortical column!) you start to see similarities between the various neurons, enough so that you can classify them by shape:

Each of these classes of neurons can exhibit a wide variety of electrical behaviours:
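That variety of firing patterns can be illustrated with Izhikevich's (2003) phenomenological spiking-neuron model - not the detailed multi-compartment models Blue Brain uses, but a minimal sketch of how just four parameters (a, b, c, d) reproduce qualitatively different behaviours. The parameter sets below are from Izhikevich's paper; the injected current and simulation length are arbitrary choices for this sketch.

```python
# Minimal Izhikevich (2003) spiking-neuron model, simulated with simple
# Euler integration. Different (a, b, c, d) parameter sets yield the
# regular-spiking, bursting, and fast-spiking patterns seen in the talk.

def izhikevich(a, b, c, d, current=10.0, steps=2000, dt=0.25):
    """Simulate one neuron for steps*dt ms; return spike times in ms."""
    v, u = -65.0, b * -65.0          # membrane potential and recovery variable
    spikes = []
    for step in range(steps):
        if v >= 30.0:                # spike threshold reached
            spikes.append(step * dt)
            v, u = c, u + d          # reset after the spike
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * (a * (b * v - u))
    return spikes

# Parameter sets from Izhikevich's paper for three firing classes:
patterns = {
    "regular spiking": (0.02, 0.2, -65, 8),
    "chattering/bursting": (0.02, 0.2, -50, 2),
    "fast spiking": (0.1, 0.2, -65, 2),
}
for name, params in patterns.items():
    print(f"{name}: {len(izhikevich(*params))} spikes in 500 ms")
```

The point of the model is exactly the one Markram makes with his classification: the diversity of observed electrical behaviours is low-dimensional enough to be captured by a small family of parameterized dynamics.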

Re: Ion Channels: gates in the cell wall and Receptors: getting the message across:

One of the factors that determines the electrical behaviour of a neuron is the combination of ion channels that it supports. You can determine which ion channels a particular neuron has 'implemented' by harvesting the neuron's cytoplasm, extracting the mRNA strands, performing reverse transcription and identifying all of the genes that code for ion channels.
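The wet-lab steps (harvesting cytoplasm, reverse transcription) obviously can't be shown in code, but the final bookkeeping step - matching the genes detected in one cell's mRNA against known ion-channel genes - can be sketched. The gene symbols below are real ion-channel and housekeeping genes, but the detected-transcript profile of the example cell is invented for illustration.

```python
# Toy sketch of the single-cell RT-PCR bookkeeping: from the transcripts
# detected in one neuron's cytoplasm, keep those known to encode ion
# channels. The channel list uses real gene symbols (Kv, Nav, Cav, HCN
# families); the example cell's expression profile is invented.

ION_CHANNEL_GENES = {
    "KCNA1": "Kv1.1 voltage-gated K+ channel",
    "SCN1A": "Nav1.1 voltage-gated Na+ channel",
    "CACNA1A": "Cav2.1 voltage-gated Ca2+ channel",
    "HCN1": "HCN1 pacemaker channel",
}

def channel_profile(detected_transcripts):
    """Return the subset of detected genes that encode ion channels."""
    return {g: ION_CHANNEL_GENES[g]
            for g in detected_transcripts if g in ION_CHANNEL_GENES}

# A hypothetical neuron expressing two channel genes plus housekeeping genes:
cell = ["GAPDH", "KCNA1", "ACTB", "HCN1"]
print(channel_profile(cell))
```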

Re: Will you remember me? I will remember you...
For all of the amazing fidelity and accuracy of the neuron models being used to create Blue Brain, there are a number of things the project doesn't tackle: e.g. the internal cellular biology of the neurons and the ability of a neuron to grow or modify its dendritic spines. At this stage ("Phase 1"), the Blue Brain project focused on creating a static snapshot in time of the neurons in the cortical column.

From the Blue Brain FAQ:

Q: How will you be able to replicate the complexity of neurons and neurotransmitter actions?

A: We have built 3D computer models of most of the main types of neurons and can simulate their individual behaviors with great detail and very accurately. At this stage we can capture the complexity of the fast neurotransmitters very accurately as well with phenomenological models that we have built. A more difficult issue is the slow neurotransmitters and the neuromodulators as well as hormonal effects. These will take a while longer to model, but there is no major obstacle to this.

Q: What is the difference between cellular and molecular simulation?

A: The cellular level is a form of phenomenological model of the underlying molecular processes - a simplification - so it does capture many key processes, but molecular interactions are of course very complex and they keep neurons on a growth trajectory (real neurons are never biochemically stable), whereas in the simulations, neurons will tend to go back to a resting position when not activated. A very important reason for going to the molecular level is to link gene activity with electrical activity. Ultimately, that is what makes neurons become and work as neurons - an interaction between nature and nurture.

Two other questions that the FAQ doesn't address: What will "Phase 2" focus on and when will it get underway? A couple of news items provide a bit of a glimpse of what's next:

From IEEE Spectrum's TechTalk:
David Cremese, the manager of Deep Computing Programs at IBM Zurich, told me that the first phase of Markram's project is complete but that IBM intends very much to collaborate on future phases.

From TechnologyReport:
Technology Report has confirmed with IBM Switzerland that the Blue Brain project is waiting for Phase II funding from the Swiss Government. See the statement from Blue Brain project director Henry Markram ... as quoted by IBM Switzerland to Technology Report on January 19, 2009:

The funding:
There is a serious misconception that IBM somehow funded or donated to support the Blue Brain Project. The BBP project is funded primarily by the Swiss government and secondarily by grants and some donations from private individuals. The EPFL bought the BG, it was not donated to the EPFL. It was at a reduced cost because at that stage it was still a prototype and IBM was interested in exploring how different applications will perform on the machine - we were a kind of beta site.

The Collaboration:
The Blue Brain Project is a project that I conceived over the past 15 years. I chose the name because of the Blue Gene series, which is a fantastic architecture for brain simulations. When we bought the BG, we also had to make sure that we have the computer engineering and computer science expertise to run the machine and optimize all the programs. So BG came to us with IBM's full support as a technology partner. This component of the collaboration is invaluable to the Project and will continue and grow as long as we have a Blue Gene or other architectures from IBM. This is by far the major component of the collaboration.

IBM Research at T.J. Watson also contributed a postdoc who was sent to work with us at the EPFL, and assigned a researcher at Watson to work on some computational neuroscience tasks. The research and term assigned to these postdocs is done, a success, and published. Actually, the term expired almost a year ago, and the IBM postdoc, Sean Hill, transferred and is now an employee of the BBP and not IBM. The researcher at T.J. Watson worked on a specific problem of collision detection between the axons and dendrites; this was done very well and is already published. Although very important projects and contributions, this is a small part of the BBP, which is carried out at the EPFL and involves neuroscience, neuroinformatics, visualization, and a vast spectrum of computational neuroscience.

BBP needs BG’s to continue the project. The architecture is perfect for brain simulations. When we manage to get our funding to buy the next BG/P finalized, we will start Phase 2 and that will of course involve the basic (and most significant) technology collaboration, and most likely also many new collaborations on specific research targeted topics where we see that IBM can, and would like to, contribute. So this is an intermediate phase while we get ready for phase 2 - molecular level modeling.

BBP sees IBM as a key partner in the BBP and I do think that IBM also sees the value in the BBP. We are getting ready for Phase 2, but it has not started until we get the next BG series.

One further hint as to what Markram might be thinking about for Phase 2 is alluded to in an aside Markram made on "Microcircuit plasticity" 48 minutes into his presentation (slide 56):
"We patched 6 cells, and we see how they're connected so we can define the circuit [they make]. Now we take the pipettes out and we wait 12 hours, and we re-patch it. And what we found is that the circuit was different. Not only after 12 hours but actually after 4 hours.

And just to show you how much inertia there is in the current scientific paradigm, [Science magazine] said that this was not interesting. It will come out in PNAS in another 2 months." (Aside: some interesting comments on this work here, including a reference to the PNAS paper.) Markram continued: "So we do these recordings, and we puff glutamate now [into the circuit] - we actually activate the circuit. We can't yet put in an intelligent stimulus, but we activate the circuit. And when you activate the circuit, here you can see that you have connections appearing and disappearing. This is potentially the substrate that Nobelist Gerry Edelman could use in all kinds of restructuring of the circuitry. Over a 4 hour period you can still see the circuitry is dynamically rewiring. For 50 years we've studied only how synapses are getting stronger and weaker, not how the circuit restructures itself."
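Markram's observation - re-patch the same cells hours later and find a different circuit - can be caricatured with a toy turnover model. Everything quantitative below (the 6-cell patch size, the per-hour appearance and disappearance probabilities) is an invented assumption; the sketch only illustrates that modest per-hour turnover leaves a circuit measurably rewired after a few hours.

```python
# Toy model of circuit rewiring: a 6-cell patch whose directed connections
# randomly appear and disappear each simulated hour. The turnover
# probabilities are invented for illustration.
import random

CELLS = 6

def rewire(conn, p_on=0.05, p_off=0.05, rng=random):
    """One 'hour' of turnover on a set of directed connections."""
    new = set()
    for i in range(CELLS):
        for j in range(CELLS):
            if i == j:
                continue
            present = (i, j) in conn
            if present and rng.random() > p_off:
                new.add((i, j))        # connection survives the hour
            elif not present and rng.random() < p_on:
                new.add((i, j))        # new connection appears
    return new

rng = random.Random(1)
circuit = {(i, j) for i in range(CELLS) for j in range(CELLS)
           if i != j and rng.random() < 0.3}      # initial patched circuit
initial = set(circuit)
for _ in range(4):                                # four simulated hours
    circuit = rewire(circuit, rng=rng)
changed = len(initial ^ circuit)                  # symmetric difference
print(f"After 4 hours, {changed} connections differ from the initial circuit")
```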

This, to me, is one of the most important "forward looking" things Markram focused on during the talk, because it goes beyond the idea of modeling the brain as a static 3D electrical network made up of ion channels and opens the door to the idea that the protein synthesis, dendritic spine growth and neuron rewiring that have been observed to happen with real neurons are also important factors in how the brain works. I hope this is a hint of things to come in Phase 2!

A couple of thoughts occurred to me as I was going through the presentation. One was that what neurons are really designed to do is precisely deliver chemical signals to a specific set of other cells. Instead of releasing chemical messenger molecules out into the body where any cell can pick them up, neurons extend long appendages that deliver the chemical messages right to the front door of the cells that are meant to receive them. How does a neuron know which cell(s) to grow these appendages towards? One of the ways the body governs how cells grow during development ('morphobiology') is through chemical gradients that trigger genetic transcription factors at certain points along the gradient. I wonder what chemical (or 'electro-chemical'?) gradients exist in the space between neurons that could guide this growth?
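The gradient-guidance idea can be sketched as a toy: a 'growth cone' on a 2D grid repeatedly steps toward the neighbouring position with the highest concentration of a guidance chemical. The exponential-decay field around a single target cell is an assumption made purely for illustration; real axon guidance involves multiple attractive and repulsive cues acting together.

```python
# Toy illustration of gradient-guided growth: follow the local chemical
# gradient, one grid step at a time, until a local maximum is reached.
# The single-source exponential field is an invented assumption.
import math

TARGET = (8, 6)   # grid position of the hypothetical target cell

def concentration(x, y):
    """Guidance-chemical level: decays with distance from the target cell."""
    return math.exp(-math.dist((x, y), TARGET))

def grow(start, max_steps=50):
    """Grow from start by steepest ascent on the concentration field."""
    pos = start
    path = [pos]
    for _ in range(max_steps):
        x, y = pos
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1)
                      for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        best = max(neighbours, key=lambda p: concentration(*p))
        if concentration(*best) <= concentration(*pos):
            break                     # local maximum: target reached
        pos = best
        path.append(pos)
    return path

path = grow((0, 0))
print(f"Growth cone reached {path[-1]} in {len(path) - 1} steps")
```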

The second thought occurred to me after seeing the variety of different types of action potentials that neurons can generate: perhaps the 'neural code' that these bursts of spikes transmit is not simply used to pass on information that has been received by the senses and processed by other regions in the brain. Perhaps it is also a set of instructions to the receiving cells - instructions that help those cells retain and process the information by generating the appropriate set of proteins at exactly the right time: an electrical stimulus that triggers the necessary chemical chain reactions within the cytoplasm of the cell. This would link the work done by Dr. Fields et al. at the cellular and molecular level with the work being done at the connectionist / action-potential modeling level... Time to fire up Google and see what work has been done in this area!

