Tuesday, February 17, 2015

A new type of mathematics

From TEDxMontreal: http://tedxtalks.ted.com/video/TEDxMontreal-David-Dalrymple-A

John von Neumann: The Computer and the Brain 

Nature article on 2-photon microscopy: Visualizing hippocampal neurons with in vivo two-photon microscopy using a 1030 nm picosecond pulse  (January, 2013 - free online access) by Ryosuke Kawakami, Kazuaki Sawada, Aya Sato, Terumasa Hibi, Yuichi Kozawa, Shunichi Sato, Hiroyuki Yokoyama & Tomomi Nemoto

- David Dalrymple's antidisciplinary, non-institutional science and technology project for digital replication of the functionality (“mind”) of simple nervous systems (“brain”)

Thursday, December 20, 2012

It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.

-- Mark Twain

So far, most of the posts in this blog have been focused on building a 'bottom-up' understanding of how the brain works - from how DNA works up to how individual neurons work. Lots of good science to base all of this stuff on. It is difficult to go further 'up the stack' in this way, however.  How do neurons work together to do useful things? How are small-scale networks of neurons structured and how do the neurons interact in order for us to do simple things like rhythmically tap a finger?
Are we there yet?
Every decade or two the scientific community gets wildly optimistic that we will be able to fully understand how cognition works and replicate the process in some non-biological system. It's been named many things over the years - cybernetics, artificial intelligence, computational intelligence, cognitive computing (see http://en.wikipedia.org/wiki/Artificial_intelligence for a nice overview). And yet, with all of the money that has been poured into this research, we still don't have enough information to build a working model of the few hundred neurons in a worm. (Some great work IS going on in this direction, however - see the Open Worm project.)

Part of the problem is due to sheer complexity - each neuron can have connections numbering in the hundreds to tens of thousands. Part of it is also due to a number of "things we know for sure that just ain't so." This post will take a step back and try to outline a number of "things we've known for sure" for quite a while that, it turns out, "just ain't so".

The things we take for granted...
As you go through the literature on cognition, you come across a number of fundamental assumptions that are either just taken for granted or have been extensively popularized:
  • neurons are the only cells that are involved in cognition.
  • neurons are connected together in a static, unchanging way.
  • emotions are localized within the limbic system.
  • the early visual cortex contains only 'feedforward' connections and acts as a simple filter bank for image processing.
  • neurons work the same way in an anesthetized brain as they do in a behaving brain.
Every now and then, however, you come across some ground-breaking work that takes a fresh look at what is REALLY happening. Let's take a fresh look at some of these preconceived notions, and at how science in general got stuck on them.

Re: Neurons are the only cells that are involved in cognition.
From the excellent book "The Other Brain" by R. Douglas Fields:
"How will understanding glia change our understanding of the mind?  Today we know that glia constitute another brain that was ignored for a century or more, a brain new to science.  There, all along, the other brain was simply overlooked.  Why?

To begin with, the wrong tools were used to explore it.  The electrodes of neuroscientists are deaf to glial communication.  Yet the glial brain was indeed communicating; it just works differently from the neuronal brain, communicating in different ways and on different time scales.  But the lack of tools is not the complete answer to why neuroscientists missed half the brain until now.
It was our thinking that failed us.  We thought we knew how the brain worked.  Dazzled by the electric neuron, neuroscientists tightened their focus intensely on this one cell type, virtually ignoring all others even though the other cells are superior in number and diversity to neurons.  Our unconscious biases clouded our perception.  The glial brain simply went unseen.
Understandably, research on the "unimportant" cells did not fare well in the fierce competition for precious funds doled out by government committees to support scientific research.  Findings on the "unimportant" cells also lacked the "significance" required for publication in the mainstream journals.
  Suddenly this situation has changed.  We are experiencing a scientific revolution sparked by a revelation:  We now know that the other brain [glia] works independently but cooperatively with the neuronal brain.
The rapid "within an eye blink" functions of our nervous system are actually a narrow slice of cognition.  Many brain functions develop and operate slowly.  Emotions and feelings, cycles of attention, cognitive changes with growth and aging, acquisition of complex skills like playing the guitar operate over time scales where glia excel and control neuronal function.  These slowly changing aspects of brain function are relatively unexplored.  Some would argue that these are the most interesting aspects of the mind.

  Our artificial conceptual division separating the other brain from the neuronal brain is eroding, and as it dissolves, we are recognizing a new brain. The links from glia to disease are obvious: seizure, infection, stroke, neurodegenerative disease, cancer, demyelinating disease, and mental illness all involve many different types of glia, but glia regulate and remodel the brain in health as well as in sickness.  Here the questions that are central to research on the neuronal brain are only now being asked of the other brain.  How plastic are glia?  Do they learn, sleep, age, differ in males and females, become impaired by disease?  How many different kinds of glia are there?
   Astrocytes also cover enormous territories in the brain.  An oligodendrocyte ensheathes scores of axons.  Microglia move at will through large regions of the brain.  A single astrocyte can engulf 100,000 synapses.  It seems unlikely that one astrocyte is monitoring and dictating the transmission of information individually across the thousands of synapses it surveys.  A more likely possibility is that astrocytes (and other glia) couple large groups of synapses or neurons into functional groups.  This would vastly increase the power and flexibility of information processing in our brains beyond the simple changes in strength of individual synapses along a neural circuit.  Glia give the brain a new dimension of information processing.

     The physical dimensions and also the mechanism of communication suggest that glia cover an enormous domain of operation.  The chemical means of cell-cell communication used by glia diffuses widely and across the hardwired lines of neuronal connections.  These features equip glia to control information processing in the brain on a fundamentally different and more global scale than the point-to-point synaptic contacts of neurons.  Such higher level oversight is likely to have significant implications for information processing and cognition.

Re: Neurons are connected together in a static, unchanging way.
From an aside on "microcircuit plasticity" that Henry Markram (EPFL/Blue Brain) made 48 minutes into his presentation The Emergence of Intelligence in the Neocortical Microcircuit (video):
"We patched 6 cells, and we see how they're connected so we can define the circuit [they make]. Now we take the pipettes out and we wait 12 hours, and we re-patch it. And what we found is that the circuit was different. Not only after 12 hours but actually after 4 hours. And just to show you how much inertia there is in the current scientific paradigm, [Science magazine] said that this was not interesting. It will come out in PNAS [Proceedings of the National Academy of Sciences] in another 2 months.". (aside: some interesting comments on this work here, including a reference to the PNAS paper). Markram continued: So we do these recordings, and we puff glutamate now [into the circuit] - we actually activate the circuit. We can't still put intelligent stimulus, but we activate the circuit. And when you activate the circuit, here you can see that you have connections appearing and disappearing. This is potentially the substrate that Nobelist Gerry Edelman could use in all kinds of restructuring of the circuitry. Over a 4 hour period you can still see the circuitry is dynamically rewiring. For 50 years we've studied only how synapses are getting stronger and weaker, not how the circuit restructures itself.


A somewhat different 'rewiring' process involving dendritic spines has also been observed in living, behaving mice. From the HHMI home page for scientist Karel Svoboda: Karel Svoboda builds windows into the brain—literally, with tiny glass slides he places in the skulls of mice. He peers inside with sophisticated microscopes and watches individual neurons.

"What we've discovered is that new experiences spur new connections in the adult brain," he says. "And that's a mechanism for learning and memory."

While researchers already knew that the adult brain can reorganize in response to new experiences, until Svoboda developed his techniques, no one had seen the process in action.

"We follow individual synapses—the tiny junctions between neurons—day-by-day for a month or more to see if and how new connections form."

Svoboda, who prior to his move to Janelia was an HHMI investigator at Cold Spring Harbor Laboratory, has devised techniques so precise that he can count the number of calcium channels on the tiny spines that reach between neurons to form synapses. Calcium channels can trigger a series of chemical events that ultimately rewires the synapse. Watching the channels' openings and closings provides direct evidence of such activity.

"It's a very powerful technique that can look deep into the brain without disturbing it," he says. "There is something like a hundred billion synapses in the mouse brain and now we have some tricks to locate the same synapse each time we put the mouse under the microscope. It took a while to figure out, but now it's pretty routine."

In a related article, Svoboda provides some of the results that were observed:
"Our first observations of the large-scale structure of neurons, their axons and dendrites, revealed that they were remarkably stable over a month." Dendrites and axons are highly branched structures, where dendrites are the input side of neurons and axons the output side.

"However, when we zoomed in closer, we found that some spines on dendrites appeared and disappeared from day to day," said Svoboda. These spines stipple the surface of dendrites, like twigs from a branch, and form the receiving ends of synapses, which are the junctions between neurons where neurotransmitters are released.

"This finding was quite unexpected, because the traditional view of neural development has been that when animals mature, the formation of synapses ceases, which is indicated by stable synaptic densities," said Svoboda. "However, the flaw in this view has been that a stable density only indicates a balanced rate of birth and death of synapses. It doesn't imply the absence of the formation of new synapses, but it was often interpreted that way...

The researchers also explored whether sensory experiences could affect the turnover of spines. In this set of experiments, they trimmed individual whiskers from the mice, forcing them to experience their environment with a subset of whiskers. This manipulation expands the representation of the intact whiskers at the expense of trimmed whiskers. There was a dramatic effect on spine turn-over.

"We found in these animals that there was a pronounced increase in the rate of birth and death of these synapses, as evidenced by increased turnover of spines," said Svoboda. "This finding indicates that there's a pronounced rewiring of the synaptic circuitry, with the formation of new synapses and the elimination of other synapses," he said...

In one set of experiments, Svoboda's team trimmed the whiskers of their mice. As the mice explored their environment, Svoboda saw "pronounced rewiring." New synapses formed and others disappeared in the part of the brain that receives input from the whiskers.

Re: Emotions are localized within the limbic system:
From the (excellent) book "The Emotional Brain - The Mysterious Underpinnings of Emotional Life" by Joseph LeDoux: The limbic system theory [developed by Paul MacLean in 1952] was a theory of localization. It proposed to tell us where emotion lives in the brain. But MacLean and later enthusiasts of the limbic system have not managed to give us a good way of identifying what parts of the brain actually make up the limbic system.
    MacLean said that the limbic system is made up of phylogenetically old cortex and anatomically related subcortical areas. Phylogenetically old cortex is cortex that was present in very old (in an evolutionary sense) animals. Although these animals are long gone, their distal progeny are around and we can look in the brains of living fish, amphibians, birds and reptiles and see what kinds of cortical areas they have and compare these to the kinds of areas that are present in newly evolved creatures - humans and other mammals. When anatomists did this early in the 1900's, they concluded that the lowly animals only have the medial (old) cortex, but mammals have both the medial and lateral (new) cortex.
    This kind of evolutionary neurologic carried the day for a long time, and it was perfectly reasonable for Herrick, Papez, MacLean, and many others to latch on to it. But, by the early 1970s, this view had begun to crumble. Anatomists like Harvey Karten and Glenn Northcutt were showing that so-called primitive creatures do in fact have areas that meet the structural and functional criteria of neocortex. What had been confusing was that these cortical areas were not exactly in the place that they are in mammals so it was not obvious that they were the same structures. As a result of these discoveries, it is no longer possible to say that some parts of the mammalian cortex were older than other parts. And once the distinction between old and new cortex breaks down, the whole concept of mammalian brain evolution is turned on its head. As a result, the evolutionary basis of the limbic lobe, rhinencephalon, visceral brain and limbic system concepts has become suspect.
    Another idea was that the limbic system might be defined on the basis of connectivity with the hypothalamus. After all, this is what led MacLean to the medial cortex in the first place. But with newer, more refined methods, it has been shown that the hypothalamus is connected with all levels of the nervous system, including the neocortex. Connectivity with the hypothalamus turns the limbic system into the entire brain, which doesn't help us very much.
  MacLean also proposed that areas of the limbic system be identified on the basis of their involvement in visceral functions. While it is true that some areas traditionally included in the limbic system contribute to the control of the autonomic nervous system, other areas, like the hippocampus, are now believed to have relatively less involvement in autonomic and emotional functions than in cognition. And other areas not included in the limbic system by anyone (especially areas in the lower brain stem) are primarily involved in autonomic regulation. Visceral regulation is a poor basis for identifying the limbic system.
    Involvement in emotional functions is, obviously, another way the limbic system has been looked for. If the limbic system is the emotive system, then studies showing which brain areas are involved in emotion should tell us where the limbic system is. But this is backward reasoning. The goal of the limbic system theory was to tell us where emotion is in the brain on the basis of knowing something about the evolution of brain structure. To use research on emotion to find the limbic system turns this criterion around. Research on emotion can tell us where the emotion system is in the brain, but not where the limbic system is. Either the limbic system exists or it does not. Since there are not independent criteria for telling us where it is, I have to say that it does not exist.
    But let's consider the issue of using research on emotion to define the limbic system a little further. MacLean had proposed that the limbic system was the kind of system that would be involved in primitive emotional functions and not in higher thought processes. Recent research is very problematic for this view. For example, damage to the hippocampus and some regions of the Papez circuit, like the mammillary bodies and anterior thalamus, have relatively little consistent effect on emotional functions but produce pronounced disorders of conscious or declarative memory - the ability to know what you did a few minutes ago and to store that information and retrieve it at some later time and to verbally describe what you remember. These were exactly the kinds of processes that MacLean proposed that the visceral brain and limbic system would not be involved with. The relative absence of involvement in emotion and the clear involvement in cognition are major difficulties for the view that the limbic system, however one chooses to define it, is the emotional brain.

    How, then, has the limbic system theory of emotion survived so long if there is so little evidence for its existence or for its involvement in emotion? There are many explanations that one could come up with. Two seem particularly cogent. One is that, though imprecise, the limbic system term is a useful anatomical shorthand for areas located in the no-man's-land between the hypothalamus and the neocortex, the lowest and highest (in structural terms) regions of the forebrain, respectively. But scientists should be precise. The limbic system term, even when used in a shorthand structural sense, is imprecise and has unwarranted functional (emotional) implications. It should be discarded.
    Another explanation for the survival of the limbic system theory of emotion is that it is not completely wrong - some limbic areas have been implicated in emotional functions. Given that the limbic system is a tightly packaged concept (though not a tightly organized, well defined system in the brain), evidence that one limbic area is involved in some emotional process has often been generalized to validate the idea that the limbic system as a whole is involved in emotion. And, by the same token, the demonstration that a limbic region is involved in one emotional process is often generalized to all emotional processes. Through these kinds of poorly reasoned associations, involvement of a particular limbic area in a very specific emotional process has tended to substantiate the view that the limbic system is the emotional brain.
  A new approach to the emotional brain is needed.

Re: The early visual cortex contains only 'feedforward' connections and acts as a simple filter bank for image processing
From Computations in the early visual cortex - Tai Sing Lee (2003):
In the classical feed-forward, modular view of visual processing, the early visual areas (LGN, V1 and V2) are modules that serve to extract local features, while higher extrastriate areas are responsible for shape inference and invariant object recognition. However, recent findings in primate early visual systems reveal that the computations in the early visual cortex are rather complex and dynamic, as well as interactive and plastic, subject to influence from global context, higher order perceptual inference, task requirement and behavioral experience. The evidence argues that the early visual cortex does not merely participate in the first stage of visual processing, but is involved in many levels of visual computation.
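As a point of reference for what that 'classical view' amounts to, here's a minimal sketch (my own toy code, not from Lee's paper) of the early visual cortex treated as a simple bank of oriented Gabor filters applied to an image. All of the kernel parameters are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0, gamma=0.5):
    """2-D Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate into the filter's frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# the 'filter bank': the same kernel at several orientations, applied to a toy image
image = np.zeros((64, 64))
image[:, 32:] = 1.0                                # a vertical luminance edge
for k in range(4):
    kernel = gabor_kernel(theta=k * np.pi / 4)
    response = convolve2d(image, kernel, mode='valid')
    print(f"orientation {k * 45:3d} deg, peak |response| = {np.abs(response).max():.2f}")
```

The point of the quote above is that this picture is far too simple: the real circuitry is dynamic, interactive, and shaped by context, task and experience.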
There are also some VERY cool discoveries being made about the role of attention in visual processing, and how attention alters the neural processing of vision. I have a hunch that this concept may apply in other areas like motor control as well. I hope to cover these topics in another post sometime.

Re: Neurons work the same way in an anesthetized brain as they do in a behaving brain
From Lamme VAF, Zipser K, Spekreijse H: "Figure-ground activity in primary visual cortex is suppressed by anesthesia." Proc Natl Acad Sci USA 1998, 95:3263-3268 :
Figure–ground-related contextual modulation recorded in V1 from awake and perceiving monkeys is abolished when these animals are anesthetized, whereas RF tuning properties remain unaffected. V1 thus hosts very different types of activity, some of which may not (RF properties) and some of which may very well (contextual modulation) be involved in visual awareness.
Also, see the 11:10 mark of David Dalrymple's guest lecture in Marvin Minsky's class (http://www.youtube.com/watch?v=xW77lANeJas), and the book "Beyond Boundaries" by Miguel Nicolelis.


When you look at all of these considerations together, it becomes apparent why progress toward a working model of cognition has been so slow. With the new understandings of the brain that are finally coming to light, hopefully the scientific community can get 'unstuck' from the "nice, easy to understand (and get funding for, and get published about), but wrong" answers that have bogged down research for the last few decades, and start from a fresher perspective that better reflects the way the brain actually works.

Thursday, December 22, 2011

Magnetoreception - a gift from Mars?

I've been finding lately that if you look deeply into just about any aspect of life it quickly becomes fascinating. Like migration, for instance...

The story starts with something called 'magnetotactic bacteria' - bacteria whose DNA directs the creation of tiny magnetite (Fe3O4) particles that can act as tiny compasses...

From Magnetotactic bacteria:
Magnetites from magnetotactic bacteria MV-1 are elongated. The elongation adds to the magnetic pull of these tiny compasses and thus helps the bacteria locate sources of food and energy. This team of authors found that the elongation was accomplished by the addition of six faces, shown in red in the figure [above]. "The process of evolution on Earth has driven magnetotactic bacteria to make perfect little bar magnets, which differ strikingly from anything found outside biology," says coauthor Joe Kirschvink.

And it turns out that birds, sea turtles and salmon also have these tiny magnetite crystals... From The Physics and Neurobiology of Magnetoreception:

Evidence for a magnetic map in sea turtles.
Juvenile sea turtles establish feeding sites in coastal areas and home back to these sites if displaced. To investigate how turtles navigate to specific sites, juvenile green turtles were captured in their coastal feeding areas near Melbourne Beach, Florida. Each turtle was tethered to an electronic tracking system and placed in a pool of water. The pool was surrounded by a magnetic coil system that could be used to replicate the magnetic fields that exist at two distant sites. Turtles exposed to a magnetic field that exists ~330 km north of their feeding grounds oriented southward, whereas those tested in a field that exists an equivalent distance to the south swam north. Therefore, turtles responded to each field by swimming in the direction that would have led towards the feeding area had they actually been in the locations at which the magnetic fields exist.

The results indicate that sea turtles have a type of 'magnetic map' that facilitates navigation to specific geographical areas.

And scientists are starting to figure out how this 'magnetic sense' works...
From Homing in on Vertebrates
Joseph L. Kirschvink, Nature - 1997
All known sensory systems have specialized receptor cells designed to respond to the external stimulus, and these are always coupled to neurons to bring this information to the brain.
It took nearly two decades to realize that the geomagnetic compass used by adult birds was programmed to be ignored if other orientation cues (such as a Sun or star compass, polarized skylight, infrasound and ultrasound) were present. These orientation cues constitute a complex but consistent web of interacting responses, which are used not only by birds but in all major vertebrate groups and many invertebrates.

There is some (controversial) evidence that the magnetotactic bacteria that started all of this originated on Mars:
From Magnetite-based magnetoreception
Magnetoreception may well have been among the first sensory systems to evolve, as suggested by the presence of magnetosomes and magnetosome chain structures in the 4.0 billion year old carbonate blebs of the Martian meteorite ALH84001. Although this is nearly half a billion years older than the oldest microbial fossils on Earth, it suggests that this genetic ability was brought here from Mars via the process of panspermia. In terms of the evolutionary arguments presented above, the striking similarity in magnetosome structure and organization in bacteria, protists, and vertebrates, and the deep fossil record, supports the hypothesis that the magnetite biomineralization system arose initially in the magnetotactic bacteria and was incorporated into eukaryotic cells through endosymbiosis; later, it may even have been used as a template to drive the widespread biomineralization events during the Cambrian explosion.

Wednesday, December 14, 2011

Diffusion Imaging - Mapping the Connectome

From The Human Connectome Project Is a First-of-its-Kind Map of the Brain's Circuitry:
Working with $30 million and just half a decade, the Human Connectome Project aims to create a first-of-its-kind map of the brain’s complex circuitry, detailing every connection linking thousands of different regions of the brain. ...
The project aims to tap state-of-the-art brain scanning technologies, including diffusion imaging, various MRI methods, and magnetoencephalography to map not just how messages move through the brain, but how various regions work together via networks and networks of networks to achieve the complexity that is the human mind. With map resolutions down to the voxel – small swaths of grey matter containing about one million neurons each – researchers estimate the HCP will generate about one petabyte of data, which will require its own supercomputer to process.

All that scanning, data gathering, and analysis should pay off though, HCP researchers say. The end result will be an open platform that other neuroscientists can use to test their own theories, hypotheses, and findings against. Such a map should help scientists find their way to deeper understandings of how the brain works as well as cures for complicated neurological disorders.

Diffusion Tensor Magnetic Resonance Imaging:

Understanding Diffusion MR Imaging Techniques: From Scalar Diffusion-weighted Imaging to Diffusion Tensor Imaging and Beyond by Patric Hagmann et al. provides a nice overview of how Diffusion Tensor MRI works. Basically, MRI is used to detect the displacement distribution (a.k.a. diffusion) of water molecules along the 'pipes' formed by axons in the brain. Experimental evidence suggests that the tissue component predominantly responsible for the anisotropy of molecular diffusion observed in white matter is not myelin, as one might expect, but rather the cell membrane. The degree of myelination of the individual axons and the density of cellular packing seem merely to modulate anisotropy. Furthermore, axonal transport, microtubules, and neurofilaments appear to play only a minor role in anisotropy measured at MR imaging. In a conventional MRI, every 3D position is assigned a grey-level value, whereas Diffusion Tensor MRI assigns each position a 3D image that encodes the molecular displacement distribution.
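To give a feel for the kind of quantity that gets computed per voxel, here is a small sketch (my own, with made-up toy numbers) of fractional anisotropy, a common scalar derived from the eigenvalues of the diffusion tensor: it is near 0 when water displacement is the same in every direction and approaches 1 when diffusion is strongly aligned, as it is along axon bundles.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy (FA) of a 3x3 symmetric diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.linalg.eigvalsh(tensor)     # diffusivities along the tensor's principal axes
    md = lam.mean()                      # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

# toy tensors (diffusivities in mm^2/s): an isotropic profile vs. an elongated,
# white-matter-like profile with one dominant direction
isotropic = np.diag([0.7e-3, 0.7e-3, 0.7e-3])
fibre_like = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(fractional_anisotropy(isotropic))    # 0.0
print(fractional_anisotropy(fibre_like))   # roughly 0.8
```

Tractography then follows the tensor's dominant eigenvector from voxel to voxel to reconstruct those 'pipes'.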

Hubs and Networks
Some interesting findings are already emerging from this technology. From Study: A rich club in the human brain:

"We've known for a while that the brain has some regions that are 'rich' in the sense of being highly connected to many other parts of the brain," said Olaf Sporns, professor in the Department of Psychological and Brain Sciences in IU's College of Arts and Sciences. "It now turns out that these regions are not only individually rich, they are forming a 'rich club.' They are strongly linked to each other, exchanging information and collaborating."

The study, "Rich-Club Organization of the Human Connectome," is published in the Nov. 2 issue of the Journal of Neuroscience. The research is part of an ongoing intensive effort to map the intricate networks of the human brain, casting the brain as an integrated dynamic system rather than a set of individual regions.

Using diffusion imaging, which is a form of MRI, Martijn van den Heuvel, a professor at the Rudolf Magnus Institute of Neuroscience at University Medical Center Utrecht, and Sporns examined the brains of 21 healthy men and women and mapped their large-scale network connectivity. They found a group of 12 strongly interconnected bihemispheric hub regions, comprising the precuneus, superior frontal and superior parietal cortex, as well as the subcortical hippocampus, putamen and thalamus. Together, these regions form the brain's "rich club."

Most of these areas are engaged in a wide range of complex behavioral and cognitive tasks, rather than more specialized processing such as vision and motor control. If the brain network involving the rich club is disrupted or damaged, said Sporns, the negative impact would likely be disproportionate because of its central position in the network and the number of connections it contains. By contrast, damage to regions outside of the rich club would likely cause specific impairments but would likely have little influence on the global flow of information throughout the brain.

Sporns said the cohesive nature of the rich club's interconnections was surprising and unexpected. It would not have been implausible to have highly connected nodes that did not interact or influence each other to the same degree.

"It's a group of highly influential regions that keep each other informed and likely collaborate on issues that concern whole brain functioning," he said.

Connectivity vs. Functionality
One of the things I find both annoying and almost funny is the marketing language that is being used for supercomputing simulations that try to equate the number of computations per second a supercomputer can make to an 'equivalent' level of neurobiology. IBM says that they can apparently do 'cat-scale' simulations. This, in spite of the fact that we don't fully understand how even the simplest neural networks work at a detailed level. Neurobiologist Henry Markram has gone as far as calling the IBM Cat Scale Brain Simulation a Hoax.

So it's important to take a step back and look at how this work fits into the larger scheme of things... From a comment Olaf Sporns made in his Brain Science Podcast interview:
I think it would be simple-minded to reduce the brain to a wiring diagram. That’s certainly not my intention, and I think it would be simple-minded if one were to propose that. You mentioned the worm, C. elegans, earlier. It has about 300 neurons—something like that—fairly stereotypically connected to each other. And we’ve known that particular wiring diagram now for 25 years, as a result of the heroic efforts of researchers who reconstructed this meticulously in the early ‘80s. But we still don’t really understand how the nervous system of C. elegans works in its entirety.

So, it is something that we need to know—sort of like the genome. We really do want that information. But it doesn’t fully explain the functioning of the organism or of the nervous system; it only gives us a foundation. It’s necessary, but not sufficient.

In addition to mapping connectivity in the brain, Diffusion Tensor Imaging (DTI) is also providing insight into brain injuries such as concussions. From Dr. Randall Benson, quoted in the Brain Damage Blog (Jan 8, 2010):

Closed head injuries (non-penetrating) including concussion are caused by sudden acceleration or deceleration of the head which causes local deformations of the brain within the cranium. The anatomical and biomechanical properties of the brain are such that white matter fibers are stretched and damaged, resulting in diffuse axonal injury (DAI) which is the hallmark pathology and accounts for most of the neurological disability in TBI (Traumatic Brain Injury).

The typical cognitive deficits in TBI, i.e., slowed information processing, decreased attention and memory, and psychiatric symptoms are caused by damage to the “cables” which allow for efficient transmission of information between neurons. TBI reduces brain network efficiency resulting in decreased capacity and global functional impairment. Concussive injury such as occurs in football with high speed collisions also causes deformation of brain substance and is felt to account for many of the immediate and delayed symptoms including the post-concussive syndrome. ERP studies of sports related concussion suggest that symptomatic recovery may occur while neurologic and brain metabolic functioning continues to be impaired from weeks to months after injury.

Incurring a second concussion before neurologic recovery has been shown to worsen outcome and may begin a downward spiral culminating in chronic traumatic encephalopathy (CTE) but this is not known. Diffusion tensor imaging (DTI) is able to detect damaged white matter fibers (axons) which have altered flow of water molecules compared with healthy axons.


Check out the Brain Science Podcast interview with Olaf Sporns, which covers the work he has been doing on brain networks.

Networks of the Brain by Olaf Sporns.

Wednesday, June 17, 2009

Junk DNA: "Listen to your junk man - he's singing"

"Listen to your junk man - he's singing ... All dressed up in satin, walking past the alley..." - Bruce Springsteen, New York Serenade

Junk DNA is looking mighty fine lately. Only a few years ago, the non-coding regions of DNA that make up over 95% of the genome were looked upon as the uninteresting desert wastelands between the regions of DNA involved in protein synthesis. How times have changed!

'Junk' DNA not junk but key to complexity

There's a very nice video on Gene Regulation (free) from Science Magazine that discusses the pivotal roles that these non-coding regions of DNA play in our genome.

As John Mattick of The University of Queensland states at the end of the video: "We're just realizing that we've only got to first base and we have a long way to go, and most of the journey forward is going to be dissecting, analyzing and rebuilding an understanding of the massively parallel and extremely sophisticated RNA regulatory circuits, which really do underpin our complexity. And the irony, I think, is that what was dismissed as junk, because it wasn't understood, will turn out to hold the secret of human complexity, including our cognitive complexity. And that's where we're going over the next 10 to 15 years."

More info on John Mattick's work is provided by a News in Science article (May 10, 2004): The researchers scanned the human, rat and mouse genomes for matching regions of 200 or more DNA base pairs and found 481 regions that were completely unchanged. They then looked at earlier organisms.

"We then looked at the dog and bovine genomes and found that they were preserved there. Amazingly, most of them were preserved in the chicken genome, which has just been released, and about half are preserved in fish," Mattick said. "So that means some of these sequences have remain unchanged during evolution for over 400 million years."

Mattick said that these sequences remained unchanged while protein-coding genes changed slowly through evolution.

"So whatever [these conserved regions] are, and whatever they're doing, evolution is really saying that they're critical to our biology in ways that we don't yet understand."

Mattick said some of the sequences overlapped with protein-coding genes, while some were outside genes. But all were strongly associated with genes involved in controlling development. "They're almost certainly regulatory," he said.
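The scanning step described above is conceptually simple: look for stretches that are letter-for-letter identical in every genome. Here is a toy sketch (my own, using made-up miniature 'genomes' and a short window instead of the 200+ base pairs used in the actual study):

```python
def shared_kmers(sequences, k):
    """Exact substrings of length k present in every sequence - a toy stand-in for
    'completely unchanged' regions shared across genomes (ignores reverse complements)."""
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    common = kmers(sequences[0])
    for seq in sequences[1:]:
        common &= kmers(seq)
    return common

# three toy 'genomes' that share one conserved block
human = "AAGTCCGTACGTTAGC" + "GATTACAGATTACA" + "TTGCAGT"
mouse = "CCGTTA" + "GATTACAGATTACA" + "GGAATC"
rat = "TTAACG" + "GATTACAGATTACA" + "CCGTAA"
print(shared_kmers([human, mouse, rat], k=14))   # finds the shared 14-letter block
```

Real genome comparisons of course work at a vastly larger scale and must handle alignment, repeats and reverse strands, but the underlying question - what has not changed at all? - is the same.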

The blog post on RNA interference provides some further details on the role RNA plays in transcriptional gene regulation. Additional info on RNA splicing in dendrites is provided in the blog post on dendritic spines. But there's a lot more going on here...

For one thing, function specific proteins can be 'stockpiled' in 'cytoplasmic granules', as well as sent to these granules for destruction. From The Scientist - A New View of Translational Control (Dec. 5, 2005): "Researchers are rapidly uncovering so-called granules in the cytoplasm that cluster function-specific proteins for RNA storage, silencing, reuse, destruction, and perhaps even splicing. Apparently related to the well characterized maternal mRNA granules that jumpstart embryogenesis, these neighborhood processing centers serve important functions in adult cells, including shaping synaptic plasticity and responding to stress.
"I think we'll learn that how cells control the destruction and translation of messenger RNAs through these structures will be a fundamental part of the control of genetic expression," says Roy Parker at the University of Arizona in Tucson. In the past two years, Parker has found cytoplasmic structures containing mRNA decapping and degradation enzymes. These compartments first appeared to serve as an mRNA junkyard: Transcripts with shortened poly(A) tails, or those otherwise no longer needed were relegated here for destruction. Parker dubbed them processing bodies, or P-bodies.
"It makes sense to have compartments for degradation. It's not just RNA randomly floating around with an enzyme happening to find it," says Keith Blackwell at Joslin Diabetes Center in Boston. But P-bodies may be more than just centralized paper shredders; they may store mRNA for later use. In September, when Parker and colleagues blocked translation in yeast cells by depriving them of glucose, the number of free-floating ribosome complexes known as polysomes decreased, and P-bodies grew in size as mRNAs went to them.5 But instead of being degraded, mRNAs accumulated. When glucose was restored, P-body size decreased and polysome number rose, suggesting that mRNAs were getting reused for translation. Reusing old mRNAs is likely more efficient and faster than making new ones, says John Rossi at the Beckman Research Institute of the City of Hope in Duarte, Calif.
In neurons, mRNA granules seem to influence synaptic plasticity (the variability in a synapse's signal strength), which appears fundamental to memory formation and learning. Kosik and colleagues found that granules store translationally silent mRNAs in dendrites. When the cell is depolarized, Kosik hypothesizes that the granules release their mRNAs to polysomes, resulting in localized protein changes. "They make sure that translation is directed to specific locations and not in the wrong place," he explains. The importance of such systems is hard to predict, Kosik says: "We could be talking about a branch of biology as extensive and intricate as the study of how proteins are directed to their destinations." Parker notes that neuronal and maternal granules have proteins in common and says he's looking to see if neuronal granules also possess P-body proteins.

More recently, James H. Eberwine of the University of Pennsylvania reports on his web page: We have shown that multiple mRNAs are localized in neuronal dendrites and have provided a formal proof of local mRNA translation in dendrites. Further, we have recently shown that the intracellular sites of localization and translation of these mRNAs can be altered by synaptic stimulation, highlighting for the first time that in vivo translation of a mRNA can occur at different rates in distinct regions of a single cell (translation is primarily exponential in dendrites and linear in the cell soma).

More info on the work of Eberwine and colleagues is described in an article in The Medical News, RNA-associated introns guide nerve-cell channel production: In nerve cells, some ion channels are located in the dendrites, which branch from the cell body of the neuron. Dendrites detect the electrical and chemical signals transmitted to the neuron by the axons of other neurons. Abnormalities in the dendrite electrical channel are involved in epilepsy, neurodegenerative diseases, and cognitive disorders, among others.

Introns are commonly looked on as sequences of "junk" DNA found in the middle of gene sequences, which after being made in RNA are simply excised in the nucleus before the messenger RNA is transported to the cytoplasm and translated into a protein. In 2005, the Penn group first found that dendrites have the capacity to splice messenger RNA, a process once believed to only take place in the nucleus of cells.

Now, in the current study, the group has found that an RNA encoding for a nerve-cell electrical channel, called the BK channel, contains an intron that is present outside the nucleus. This intron plays an important role in ensuring that functional BK channels are made in the appropriate place in the cell.

When this intron-containing RNA was knocked out, leaving the maturely spliced RNA in the cell, the electrical properties of the cell became abnormal. “We think the intron-containing mRNA is targeted to the dendrite where it is spliced into the channel protein and inserted locally into the region of the dendrite called the dendritic spine. The dendritic spine is where a majority of axons from other cells touch a particular neuron to facilitate neuronal communication” says Eberwine. “This is the first evidence that an intron-containing RNA outside of the nucleus serves a critical cellular function.”

“The intron acts like a guide or gatekeeper,” says Eberwine. “It keys the messenger RNA to the dendrite for local control of gene expression and final removal of the intron before the channel protein is made. Just because the intron is not in the final channel protein doesn't mean that it doesn't have an important purpose.”

The group surmises that the intron may control how many mRNAs are brought to the dendrite and translated into functional channel proteins. The correct number of channels is just as important for electrical impulses as having a properly formed channel.

The investigators believe that this is a general mechanism for the regulation of cytoplasmic RNAs in neurons. Given the central role of dendrites in various physiological functions they hope to relate this new knowledge to understanding the molecular underpinnings of memory and learning, as well as components of cognitive dysfunction resulting from neurological disease.

So it really seems that each dendrite is remarkably self-contained, with its own mitochondrial energy supply, the ability to synthesize proteins, and the ability to warehouse proteins - and that the dendritic machinery can be dynamically reconfigured by the neuron based on synaptic activity: the mitochondria and the mRNA localization and translation sites can move from quiescent dendrites to active ones on demand.

Junk DNA has other secrets that are being discovered, as well - for example, RNA-guided mechanisms underlying genome rearrangement. From a recent article in ScienceDaily (May 21, 2009): Laura Landweber and other members of her team are researching the origin and evolution of genes and genome rearrangement, with particular focus on Oxytricha because it undergoes massive genome reorganization during development.

In her lab, Landweber studies the evolutionary origin of novel genetic systems such as Oxytricha's. By combining molecular, evolutionary, theoretical and synthetic biology, Landweber and colleagues last year discovered an RNA (ribonucleic acid)-guided mechanism underlying its complex genome rearrangements.

"Last year, we found the instruction book for how to put this genome back together again -- the instruction set comes in the form of RNA that is passed briefly from parent to offspring and these maternal RNAs provide templates for the rearrangement process," Landweber said. "Now we've been studying the actual machinery involved in the process of cutting and splicing tremendous amounts of DNA. Transposons are very good at that."
They have concluded that the genes spur an almost acrobatic rearrangement of the entire genome that is necessary for the organism to grow.

It all happens very quickly. Genes called transposons in the single-celled pond-dwelling organism Oxytricha produce cell proteins known as transposases. During development, the transposons appear to first influence hundreds of thousands of DNA pieces to regroup. Then, when no longer needed, the organism cleverly erases the transposases from its genetic material, paring its genome to a slim 5 percent of its original load.

"The transposons actually perform a central role for the cell," said Laura Landweber, a professor of ecology and evolutionary biology at Princeton and an author of the study. "They stitch together the genes in working form." The work appeared in the May 15 edition of Science.

Listen to your junk man - he's singing!

Monday, May 11, 2009

Connecting the dots... "Let us begin anew"

As I've learned more about bio-systems, starting from water molecules and working up to synapses and networks of neurons, I've come to appreciate how incredibly powerful and compact the molecular computing substrate that life is built on really is. Our most powerful supercomputers take days to calculate how one protein molecule folds, while the simplest bacteria can perform millions of these operations in parallel in seconds. What these simulations give us, however, is insight into exactly what special characteristics each protein has in all of the various shapes it can assume. Building up from this low-level understanding, hopefully we will be able to work out the larger-scale purpose of each of the various signaling chains and genetic transcriptions that are taking place. Perhaps we may one day be able to model these complex molecular interactions using state machines and logic that achieve a functionally equivalent set of operations, without having to precisely simulate cells at the molecular level.

There are a number of new approaches to try to get to this level of understanding.
On the Brain Science Podcast episode mentioned in the previous post, Seth Grant provided some nice descriptions of the differences and connections among the "trendy" terms "genetics", "genomics" and "proteomics":
Genetics is the study of gene function or the function of the biology as revealed by genes, and typically involves the study of cells or animals where there has been a mutation or an abnormality introduced into a gene and as a result of that, the function of the cell or animal is changed. And, of course, the readers will understand this, but a mutation in a gene effectively means a change in the DNA sequence that encodes that gene.
Genomics is a different thing. Genomics is the study of the organization of all of the DNA or the 'genome'. And, of course, the genome encodes roughly 20,000 genes in mammalian systems, and therefore, when one is studying the genomics of man or mouse, we're studying all of the genes. Typically in genetics you might only study one gene at a time in many cases. So that gives you a sense of the difference between the large scale features of genomics and the somewhat small scale features of genetics.
Proteomics is the study of the sets of proteins, or all of the proteins that perform biological functions or are found in cells or tissues. "Proteome" is to proteins what "genome" is to genes. Again, proteome is dealing with large sets of molecules. In our case, we were particularly interested in the 'proteome' (or all of the proteins) found in synapses. But you might be interested in all of the 'proteome' of red blood cells, in other words, all of the proteins that are found in a red blood cell.

There's a very good paper called "The Many Facets of Natural Computing" that looks at some of the interaction networks that are active in biological systems. The paper was written by
  • Lila Kari, Department of Computer Science, University of Western Ontario, London, ON, N6A 5B7, Canada, lila@csd.uwo.ca

  • Grzegorz Rozenberg, Leiden Inst. of Advanced Computer Science, Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands, Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309, USA, rozenber@liacs.nl

    [A]t the cell level, scientific research on organic components has focused strongly on four different interdependent interaction networks, based on four different “biochemical toolkits”: nucleic acids (DNA and RNA), proteins, lipids, carbohydrates, and their building blocks.

    The genome consists of DNA sequences, some of which are genes that can be transcribed into messenger RNA (mRNA), and then translated into proteins according to the genetic code that maps 3-letter DNA segments into amino acids. A protein is a sequence over the 20-letter alphabet of amino acids. Each gene is associated with other DNA segments (promoters, enhancers, or silencers) that act as binding sites for proteins which activate or repress the gene’s transcription. Genes interact with each other indirectly, either through their gene products (mRNA, proteins) which can act as transcription factors to regulate gene transcription – either as activators or repressors –, or through small RNA species that directly regulate genes.

    These gene-gene interactions, together with the genes’ interactions with other substances in the cell, form the most basic interaction network of an organism, the gene regulatory network. Gene regulatory networks perform information processing tasks within the cell, including the assembly and maintenance of the other networks. Research into modeling gene regulatory networks includes qualitative models such as random and probabilistic Boolean networks, asynchronous automata, and network motifs. (ref.)
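(An aside from me, not from the paper: the 'random Boolean networks' mentioned above are easy to sketch. Each gene is simply on or off, and every synchronous update step recomputes each gene from its regulators. The three-gene wiring below is hypothetical, chosen only to show how such a network settles into a repeating pattern of states.)

```python
def step(state, rules):
    """Synchronously update every gene: each gene's next value is a Boolean
    function of its regulators' current values."""
    return {gene: fn(state) for gene, fn in rules.items()}

# hypothetical 3-gene regulatory wiring, for illustration only
rules = {
    "A": lambda s: not s["C"],             # C represses A
    "B": lambda s: s["A"],                 # A activates B
    "C": lambda s: s["A"] and not s["B"],  # A activates C unless B is present
}

state = {"A": True, "B": False, "C": False}
for t in range(6):
    print(t, state)
    state = step(state, rules)             # the trajectory falls into a short cycle (an attractor)
```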
    Proteins and their interactions form another interaction network in a cell, that of biochemical networks, which perform all mechanical and metabolic tasks inside a cell. Proteins are folded-up strings of amino acids that take three-dimensional shapes, with possible characteristic interaction sites accessible to other molecules. If the binding of interaction sites is energetically favourable, two or more proteins may specifically bind to each other to form a dynamic protein complex by a process called complexation. A protein complex may act as a catalyst by bringing together other compounds and facilitating chemical reactions between them. Proteins may also chemically modify each other by attaching or removing modifying groups, such as phosphate groups, at specific sites. Each such modification may reveal new interaction surfaces.

    There are tens of thousands of proteins in a cell. At any given moment, each of them has certain available binding sites (which means that they can bind to other proteins, DNA, or membranes), and each of them has modifying groups at specific sites either present or absent. Protein-protein interaction networks are large and complex, and finding a language to describe them is a difficult task. A significant progress in this direction was made by the introduction of Kohn-maps, a graphical notation that resulted in succinct pictures depicting molecular interactions. Other approaches include the textual biocalculus, or the recent use of existing process calculi (π-calculus), enriched with stochastic features, as the language to describe chemical interactions. (ref.)
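(Another aside from me, not from the paper: the 'stochastic features' added to process calculi usually amount to simulating reaction events one at a time, with exponentially distributed waiting times. Here is a minimal Gillespie-style sketch of reversible complexation, A + B <-> AB; the molecule counts and rate constants are made up.)

```python
import random

def gillespie(a, b, ab, k_on, k_off, t_end, seed=1):
    """Gillespie-style stochastic simulation of reversible complexation A + B <-> AB.
    Event propensities: binding = k_on*A*B, unbinding = k_off*AB."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        p_bind, p_unbind = k_on * a * b, k_off * ab
        total = p_bind + p_unbind
        if total == 0.0:
            break
        dt = rng.expovariate(total)          # waiting time until the next reaction event
        if t + dt > t_end:
            break
        t += dt
        if rng.random() * total < p_bind:    # pick which reaction fires
            a, b, ab = a - 1, b - 1, ab + 1
        else:
            a, b, ab = a + 1, b + 1, ab - 1
    return a, b, ab

# made-up numbers: 100 A and 80 B molecules, arbitrary rate constants
print(gillespie(a=100, b=80, ab=0, k_on=0.01, k_off=0.5, t_end=10.0))
```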

    Yet another biological interaction network, and the last that we discuss here, is that of transport networks mediated by lipid membranes. Some lipids can self-assemble into membranes and contribute to the separation and transport of substances, forming transport networks. A biological membrane is more than a container: it consists of a lipid bilayer in which proteins and other molecules, such as glycolipids, are embedded. The membrane structural components, as well as the embedded proteins or glycolipids, can travel along this lipid bilayer. Proteins can interact with free-floating molecules, and some of these interactions trigger signal transduction pathways, leading to gene transcription. Basic operations of membranes include fusion of two membranes into one, and fission of a membrane into two. Other operations involve transport, for example transporting an object to an interior compartment where it can be degraded. Formalisms that depict the transport networks are few, and include membrane systems described earlier, and brane calculi.

    The gene regulatory networks, the protein-protein interaction networks, and the transport networks are all interlinked and interdependent. Genes code for proteins which, in turn, can regulate the transcription of other genes, membranes are separators but also embed active proteins in their surfaces. Currently there is no single formal general framework and notation able to describe all these networks and their interactions. Process calculus has been proposed for this purpose, but a generally accepted common language to describe these biological phenomena is still to be developed and universally accepted. It is indeed believed that one of the possible contributions of computer science to biology could be the development of a suitable language to accurately and succinctly describe, and reason about, biological concepts and phenomena.

    One of the problems in science is that, in order to understand things deeply, scientists typically need to specialize in one specific area of research. As Daphne Koller, a professor of computer science at Stanford University, relates in an interview about being awarded the first-ever ACM-Infosys Foundation Award in the Computing Sciences (ref.):
    The world is very complex: people interact with other people as well as with objects and places. If you want to describe what’s going on, you have to think about networks of things that interact with one another. We’ve found that by opening the lens a little wider and thinking not just about a single object but about everything to which it’s tied, you can reach much more informed conclusions.

    [Interviewer] Which was an insight you brought to the field of artificial intelligence…
    Well, I wasn’t the only one involved. There had been two almost opposing threads of work in artificial intelligence: there were the traditional AI folks, who grew up on the idea of logic as the most expressive language for representing the complexities of our world. On the other side were people who came in from the cognitive reasoning and machine learning side, who said, “Look, the world is noisy and messy, and we need to somehow deal with the fact that we don’t know things with certainty.” And they were both right, and they both had important points to make, and that’s why they kept arguing with each other.

    How did probabilistic relational modeling help settle the dispute?
    The synthesis of logic and probability allows you to learn this type of holistic representation [of complex systems] from real-world data. It gives you the ability to learn higher-level patterns that talk about the relationships between different individuals in a reusable way.

    You’ve begun applying your techniques to the field of biology.
    Originally, it was a method in search of a problem. I had this technology that integrated logic and probability, and we had done a lot of work on understanding the patterns that underlay complex data sets. Initially, we were looking for rich data sets to motivate our work. But I quickly became interested in the problem in and of itself.

    What problem is that?
    Biology is undergoing a transition from a purely experimental science — where one studies small pieces of the system in a very hypothesis-driven way — to a field where enormous amounts of data about an entire cellular system can be collected in a matter of weeks. So we’ve got millions of data points that are telling us very important insights, and we have no idea how to get at them.

    What have you learned about interdisciplinary collaboration from your work with biologists?
    The important thing is to set up a collaborative effort where each side respects the skills, insights, and evaluation criteria of the other. For biologists to care about what you build, you need to convince them that it actually produces good biology. You have to train yourself to understand what things they care about, and at the same time you can train them in the methods of your community.

    So it’s not just learning a new scientific language, but training yourself to respect a different research process.
    It’s a question of finding people who are capable of learning enough of the other side’s language to make the collaboration productive.
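
    To give a flavour of what that 'synthesis of logic and probability' means in practice, here is a deliberately tiny, hypothetical sketch: a two-node model (gene expressed -> protein detected) with made-up probabilities, queried with Bayes' rule. Koller's probabilistic relational models go far beyond this, but the core move - attaching probabilities to structured relationships and reasoning over them - is the same:

# Minimal illustration of reasoning with structure plus uncertainty:
# a two-node model, P(gene expressed) and P(protein detected | gene expressed).
# All probabilities are invented for the example.

P_GENE = 0.30                                      # prior: gene is expressed
P_PROTEIN_GIVEN_GENE = {True: 0.90, False: 0.05}   # detection probability per state

def posterior_gene_given_protein(protein_detected: bool) -> float:
    """Bayes' rule: P(gene expressed | protein observation)."""
    prior = {True: P_GENE, False: 1 - P_GENE}
    likelihood = {
        g: (P_PROTEIN_GIVEN_GENE[g] if protein_detected
            else 1 - P_PROTEIN_GIVEN_GENE[g])
        for g in (True, False)
    }
    unnorm = {g: likelihood[g] * prior[g] for g in (True, False)}
    return unnorm[True] / (unnorm[True] + unnorm[False])

print(posterior_gene_given_protein(True))    # ~0.885: detection makes expression likely
print(posterior_gene_given_protein(False))   # ~0.043: non-detection makes it unlikely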

    Koller's sentiment about cross-disciplinary collaboration is echoed in numerous papers I've come across, as well as in the poetic conclusion of "The Many Facets of Natural Computing":
    In these times brimming with excitement, our task is nothing less than to discover a new, broader, notion of computation, and to understand the world around us in terms of information processing.

    Let us step up to this challenge. Let us befriend our fellow the biologist, our fellow the chemist, our fellow the physicist, and let us together explore this new world. Let us, as computers in the future will, embrace uncertainty. Let us dare to ask afresh: “What is computation?”, “What is complexity?”, “What are the axioms that define life?”.

    Let us relax our hardened ways of thinking and, with deference to our scientific forebears, let us begin anew.

    Bulletin of the EATCS (2007): Machines of systems biology
    Nature (Sept. 2002): Cellular abstractions: Cells as computation
    Cambridge University Press (1999): Communicating and Mobile Systems: the π-Calculus
    Information Technology in Systems Biology (Kohn Maps)
    Developmental Biology (2007): The regulatory genome and the computer
    Science Signaling (2004): Molecular interaction map of the mammalian cell cycle control and DNA repair systems
    The Calculus of Looping Sequences for Modeling Biological Membranes
    IEEE (2007): A Uniform Framework of Molecular Interaction for an Artificial Chemistry with Compartments

    Monday, May 04, 2009

    "Once more into the breach, dear friends, once more!"

    The more I read about "Cognitive Computing", the more disenchanted I get with most of the work being done under this banner. There is an awful lot of hype: everything from university researchers claiming that it is simple to create a silicon chip that accurately emulates millions of neurons, and projects to create silicon prosthetics for some of the major centers in the brain, to overly ambitious claims about how close we are to getting computers to 'think' - and thus to the resulting 'singularity'. Most 'cognitive computing' efforts seem to miss the point that there is more happening here than simple electrical signaling over a network. So coming across the following articles and podcast was like a breath of fresh spring air:

    Complex Synapses Drove Brain Evolution:

    ScienceDaily (June 9, 2008) — One of the great scientific challenges is to understand the design principles and origins of the human brain. New research has shed light on the evolutionary origins of the brain and how it evolved into the remarkably complex structure found in humans.

    The research suggests that it is not size alone that gives more brain power, but that, during evolution, increasingly sophisticated molecular processing of nerve impulses allowed development of animals with more complex behaviours. The study shows that two waves of increased sophistication in the structure of nerve junctions could have been the force that allowed complex brains - including our own - to evolve. The big building blocks evolved before big brains.

    Current thinking suggests that the protein components of nerve connections - called synapses - are similar in most animals from humble worms to humans and that it is increase in the number of synapses in larger animals that allows more sophisticated thought. "Our simple view that 'more nerves' is sufficient to explain 'more brain power' is simply not supported by our study," explained Professor Seth Grant, Head of the Genes to Cognition Programme at the Wellcome Trust Sanger Institute and leader of the project. "Although many studies have looked at the number of neurons, none has looked at the molecular composition of neuron connections. We found dramatic differences in the numbers of proteins in the neuron connections between different species".

    "We studied around 600 proteins that are found in mammalian synapses and were surprised to find that only 50 percent of these are also found in invertebrate synapses, and about 25 percent are in single-cell animals, which obviously don't have a brain." Synapses are the junctions between nerves where electrical signals from one cell are transferred through a series of biochemical switches to the next. However, synapses are not simply soldered joints, but miniprocessors that give the nervous systems the property of learning and memory. Remarkably, the study shows that some of the proteins involved in synapse signalling and learning and memory are found in yeast, where they act to respond to signals from their environment, such as stress due to limited food or temperature change.

    "The set of proteins found in single-cell animals represents the ancient or 'protosynapse' involved with simple behaviours," continues Professor Grant. "This set of proteins was embellished by addition of new proteins with the evolution of invertebrates and vertebrates and this has contributed to the more complex
    behaviours of these animals.

    "The number and complexity of proteins in the synapse first exploded when muticellular animals emerged, some billion years ago. A second wave occurred with the appearance of vertebrates, perhaps 500 million years ago."

    There's an excellent podcast interview with Dr. Seth Grant at BrainScience - episode 51 that covers this work in more depth. Highly recommended!
    "The ancestral proteins that are found in unicellular animals are the proteins that are found in more or less all of the different synapses in the brain of the mouse. The most recently evolved proteins - the vertebrate proteins - those are the ones that are most diverse in the brain regions of the mouse. So some of those proteins are very high, for example, in the frontal cortex, others might be high in the hippocampus, others might be high in the cerebellum; in other words, they're very variable like that.

    So what that is telling us, then, and I'm just returning now to that ancient vertebrate synapse that arose before big brains, it tells us that when this 'big synapse' evolved, what the vertebrate brain then did as it grew bigger and evolved afterwards - it exploited the new proteins that had evolved into making new types of neurons in new types of regions of the brain.

    In other words, we would like to put forward the view that the synapse evolution has allowed brain specialization - regionalization - to occur. And we know from many many studies that the regionalization of the brain - there's parts involved with learning, there's parts involved with fear, there's parts involved with some aspect of mood or so on, there's parts involved with motor function - that all appears to be built on the template of molecular evolution of the synapse. "

    Journal References
    Nature Neuroscience (8 June 2008): Evolutionary expansion and anatomical specialization of synapse proteome complexity. Emes RD, Pocklington AJ, Anderson CNG, Bayes A, Collins MO, Vickers CA, Croning MDR, Malik BR, Choudhary JS, Armstrong JD and Grant SGN.

    PubMed Abstract: Neurotransmitters drive combinatorial multistate postsynaptic density networks. Coba MP, Pocklington AJ, Collins MO, Kopanitsa MV, Uren RT, Swamy S, Croning MD, Choudhary JS, Grant SG.

    The mammalian postsynaptic density (PSD) comprises a complex collection of approximately 1100 proteins. Despite extensive knowledge of individual proteins, the overall organization of the PSD is poorly understood. Here, we define maps of molecular circuitry within the PSD based on phosphorylation of postsynaptic proteins. Activation of a single neurotransmitter receptor, the N-methyl-D-aspartate receptor (NMDAR), changed the phosphorylation status of 127 proteins.

    Stimulation of ionotropic and metabotropic glutamate receptors and dopamine receptors activated overlapping networks with distinct combinatorial phosphorylation signatures. Using peptide array technology, we identified specific phosphorylation motifs and switching mechanisms responsible for the integration of neurotransmitter receptor pathways and their coordination of multiple substrates in these networks. These combinatorial networks confer high information-processing capacity and functional diversity on synapses, and their elucidation may provide new insights into disease mechanisms and new opportunities for drug discovery.
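
    A back-of-the-envelope calculation shows why such combinatorial phosphorylation confers high information-processing capacity: if each of N components can independently sit in one of two states (phosphorylated or not), the synapse can in principle occupy 2^N distinct molecular configurations. The sketch below treats the 127 proteins mentioned in the abstract as simple on/off switches - a drastic simplification of my own, not a claim from the paper - just to show the scale involved:

# Back-of-the-envelope capacity of combinatorial phosphorylation states.
# Treating each of the 127 NMDAR-responsive proteins as a binary on/off
# switch is a drastic simplification (many proteins carry several sites,
# and the states are not independent), but it shows why "combinatorial"
# matters for information capacity.
import math

switches = 127                    # proteins treated as binary phospho-switches
distinct_states = 2 ** switches   # upper bound on combinatorial states
print(f"2^{switches} is about 10^{math.log10(distinct_states):.0f} distinct states")
# -> 2^127 is about 10^38 distinct states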