2021: the review of the year in neuroscience

Welcome back my friends, to the show that never ends

Mark Humphries
Published in The Spike
Dec 21, 2021 · 11 min read


Credit: Pixabay.

It’s the most wonderful time of the year. Chiefly because neuroscience stops churning out new stuff and we get to draw breath before plunging anew into the maelstrom. A chance to take a look back at the year in neuroscience, and wonder how the hell so much startling research was published in the face of a global pandemic.

So welcome one and all to the sixth annual review of the year in neuroscience from The Spike. Here in no particular order are 5 startling things you may have missed this year.

1. Neurons that steal
2. Great news for fMRI: When activity and wiring agree
3. Not so great news for fMRI: Hard limits on what it can see
4. A good chance we’ve got Parkinson’s disease wrong
5. Mice are damn smart

1/ Neurons that steal
“Hello, police? I’d like to report an arson attack: there’s a smouldering pile of ashes in the Neuroscience Hall of Fame where the Dale’s Law exhibit used to be”.

There are two versions of Dale’s Law, and both of them are wrong. The strongest version is that a neuron releases only one type of neurochemical at every synapse it makes. This is very much wrong, as we’ve known for decades that some types of neuron release more than one neurochemical at their synapses. The weaker version is that a neuron releases the same set of neurochemicals at every synapse. This is also wrong, but to its credit in fewer known cases — for example, dopamine neurons can also release glutamate, but at different sites along the same branch of an axon. Still, as a useful rule of thumb, Dale’s Law staggers on, with the comforting thought that at least neurons make the same set of transmitters, even if they are released at different locations.

Dopamine neurons have now ruined what was left of Dale’s Law. Back in 2012, Tritsch, Ding and Sabatini reported that dopamine neurons also release GABA. On the face of it, just one more complication, adding a third transmitter to the party. But there was something not quite right about it — dopamine neurons didn’t seem to have the molecular machinery for moving GABA about, or even making it. Where then was this GABA coming from?

Late this year, Melani and Tritsch reported cracking the case: the axon terminals of dopamine neurons steal GABA from the extracellular space. I’ll say that again: instead of the molecular machinery for making GABA, they have the machinery for taking GABA molecules into their own terminals, and stuffing them into vesicles, ready for release as their very own transmitter. For faking making GABA.

By torching Dale’s Law, this finding also creates an exciting if scary new precedent: there’s a whole new way for neurons to get hold of neurotransmitters — where else in the brain is this happening? That screaming you just heard was a theorist awaking in a cold sweat.

2/ When activity and wiring agree

This year seems to have been a classic good news, bad news scenario for neuroimaging. fMRI is the best way we have of looking into the active mind of a fellow human being. Well, more accurately, the best ethically permissible way: sticking hundreds of electrodes in someone’s brain is perfectly possible, just frowned upon. But the signal fMRI typically measures is not the activity of neurons, but the level of oxygen-rich blood around neurons. We get from blood to neural activity by a chain of reasoning: the more active neurons are, the more energy they need; the more energy they need, the more oxygen they need from the blood. Ergo, more oxygen-rich blood is a proxy for more neural activity. Interpreting the workings of neurons based on blood signals has thus always needed caution and care, but has often rested on untested assumptions.

One of those assumptions is that it is meaningful to study correlations between the blood-oxygen signals in different brain regions — what the fMRI literature calls “functional networks”. Such correlations are often interpreted as some kind of correspondence to the underlying physical connections between those regions.

Here’s the good news: in the fly brain, they correspond pretty well. Turner, Mann and Clandinin mashed together two of the greatest technological strides of modern neuroscience: the synapse-resolution connectome of (half of) the Drosophila brain and the ability to do calcium imaging across its entire brain. They simply asked: do the correlations between the brain-wide (calcium) activity match where the wires go?

The match was surprisingly strong. Looking at 36 regions, they showed a strong match between the strength of correlations defined by the calcium activity, and the strength of physical connections defined by the connectome. Strong, but not perfect, naturally. Some regions had strongly correlated activity even though they were weakly connected, and their correlations could be well-explained by indirect pathways — jumps between multiple, strongly connected regions.
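
If you want to see that logic in action, here's a minimal toy sketch (in Python, and emphatically not the paper's analysis): simulate activity on a made-up connectome, build the functional network of activity correlations, and ask how well it matches the direct wiring versus the wiring plus two-step, indirect paths. Every number and matrix in it is invented purely for illustration.

```python
import numpy as np

# Toy structure-function comparison (illustration only, not the paper's analysis).
rng = np.random.default_rng(2)
n_regions, n_time = 36, 5000                      # 36 regions, as in the fly comparison

W = rng.random((n_regions, n_regions)) ** 3       # toy connectome: mostly weak, a few strong links
np.fill_diagonal(W, 0)
A = 0.9 * W / np.max(np.abs(np.linalg.eigvals(W)))  # scale the weights for stable dynamics

# Simulate linear dynamics: each region is driven by its inputs plus noise
x = np.zeros((n_time, n_regions))
for t in range(1, n_time):
    x[t] = A @ x[t - 1] + rng.standard_normal(n_regions)

FC = np.corrcoef(x, rowvar=False)                 # the "functional network" of activity correlations

# Compare the functional network to the direct wiring, and to the wiring plus
# two-step indirect paths (W @ W counts all two-jump routes between regions)
iu = np.triu_indices(n_regions, k=1)              # off-diagonal region pairs only
sym = (W + W.T) / 2                               # correlations are symmetric, so symmetrise the wiring
direct = np.corrcoef(FC[iu], sym[iu])[0, 1]
indirect = np.corrcoef(FC[iu], (sym + sym @ sym)[iu])[0, 1]
print(f"activity correlations vs direct wiring:           r = {direct:.2f}")
print(f"activity correlations vs wiring + indirect paths: r = {indirect:.2f}")
```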

Still, good news, huh? Correlations of neural activity reflect something about physical wiring: one assumption of fMRI research has some support. Now if only someone could show that what fMRI measured was actually something closely related to neural activity, this would be a great result.

Oh wait, they did that too.

3/ Hard limits on brain imaging
You see, the biggest assumption fMRI rests on is that bold claim of blood-oxygen signals being a proxy for neural activity. This year brought some terrific news: it seems they are. In the fly. And some undeniably bad news: they are a proxy only on slow time-scales, too slow to track the correspondence between neural activity and normal, rapidly changing behaviour.

And it came from the lab of Thomas Clandinin again. Just like their comparison between the activity and physical networks in the fly brain, above, this work was simple in concept, but devilish in execution.

This time they used whole-brain calcium imaging to capture the neural activity of 54 brain regions. And then in the same 54 brain regions they imaged a sensor that emits light in proportion to the amount of ATP being used — a direct measure of the amount of energy consumed in each brain region. From both sets of signals, they constructed those “functional networks”, the correlations between regions: the correlation of activity from the calcium imaging, and the correlation of metabolism from the ATP sensor. So they could then ask, are these correlations the same? Or, to put it another way, does the stuff that fMRI is thought to measure — energy demand — actually correspond to neural activity?

The answer was yes: the network of correlations between the activity of brain regions was very similar to the network of correlations between the metabolism of brain regions. So similar, even this grumpy sceptic was moderately impressed (for the cognoscenti: the correlation between the correlation matrices was R=0.8). Which implies that, so long as fMRI is measuring energy use, correlations computed from fMRI signals should closely match correlations between neural activity.
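
For the curious, here's a minimal toy sketch of that comparison, not the paper's code: build a region-by-region correlation matrix from each signal, then correlate the two matrices' off-diagonal entries. The data are made up; only the logic mirrors the paper.

```python
import numpy as np

# Toy version of the comparison (illustration only): build a "functional network"
# from an activity signal and from an energy signal recorded in the same regions,
# then ask how similar the two networks are.
rng = np.random.default_rng(0)
n_time, n_regions, n_factors = 2000, 54, 5         # 54 regions, as in the fly data

latent = rng.standard_normal((n_time, n_factors))  # made-up brain-wide dynamics
mixing = rng.standard_normal((n_factors, n_regions))
activity = latent @ mixing                         # "true" activity of each region

calcium = activity + rng.standard_normal((n_time, n_regions))  # noisy activity readout
atp     = activity + rng.standard_normal((n_time, n_regions))  # noisy energy-use readout

# Each functional network is the region-by-region correlation matrix of one signal
fc_calcium = np.corrcoef(calcium, rowvar=False)    # (54, 54)
fc_atp     = np.corrcoef(atp, rowvar=False)        # (54, 54)

# Compare the two networks by correlating their off-diagonal entries:
# the "correlation between the correlation matrices"
iu = np.triu_indices(n_regions, k=1)
R = np.corrcoef(fc_calcium[iu], fc_atp[iu])[0, 1]
print(f"similarity of the activity and metabolism networks: R = {R:.2f}")
```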

A nuance though is that this coupling between activity and metabolism is best at low frequencies. By “low” we mean less than once every 10 seconds (< 0.1 Hz). And this little nuance has some awkward ramifications. It means that metabolism signals can only track changes of behaviour on time-scales of tens of seconds (and even then Mann et al show it does so poorly, with predictions barely above chance and certainly far worse than from neural activity). It seems metabolic activity cannot track rapid — normal — changes in behaviour. The limitation is not the technology, nor flaws in the proposed chain of reasoning from oxygen-rich blood to neural activity, but the source signal itself: the fluctuations in the use of energy are much slower than the changes in behaviour.

There is an upside. Standard fMRI experiments don’t look at changes in behaviour. They either measure your brain while you lie motionless, bored, and a little claustrophobic in the scanner, doing nothing precisely so that the resting state of your brain can be captured. Or they show you a thing or play you a sound in a single trial that lasts seconds, looking only for whether and where there was a response in your brain. For all these kinds of experiment, the results of Mann and co are great, more evidence that the fMRI signals these experiments measure really are related to neural activity. It’s a start.

4/ Have we got Parkinson’s disease all wrong?

This one made my head hurt. All together now, repeat the Parkinson’s disease mechanism mantra after me: dopamine neurons are lost from the midbrain; which causes the loss of dopamine in the striatum; which in turn leads to the major motor symptoms of Parkinson’s disease — the slowness of movement, and the inability to initiate new movements. (Though not, notably, tremor, on which there has always been strenuous debate).

A new paper from Jim Surmeier’s lab said: nope. Patricia González-Rodríguez and colleagues developed a mouse model that recaps in detail the sequence of damage to dopamine neurons over time, rather than having those neurons disappear all at once as happens in other mouse models. And this let González-Rodríguez study for the first time exactly how the changes in dopamine neurons map to the symptoms of Parkinson’s disease.

The first surprise: in this mouse model the neurons don’t die off at first; their axons stop working instead. And yes, because the axons have failed, the release of dopamine into the striatum falls dramatically. But then came the second surprise: the deficits are minor. The mice have difficulty learning new stuff, but not doing stuff; they show no sign of problems with moving. Only when the dopamine neurons stopped releasing dopamine from their own bodies and dendrites did the problems with movement blow up. Only when dopamine stopped being pumped out around the neurons themselves.

So, no, apparently it is not the striatum we should be looking at. Which is a shame as every model of how the loss of dopamine causes Parkinson’s-like symptoms focuses on the striatum. No model of Parkinson’s disease has a role for what dopamine does in and around these midbrain neurons. We might be back to square one on our mantra.

Yet there was a tantalising bit of good news, albeit in mice: even when the full Parkinson’s-like symptoms showed up, most of the dopamine neurons were not dead: they were just dormant, not releasing any dopamine. Could they be restarted? Watch this space.

5/ Mice can be damn smart when they want to be
Mice are dumb. Neuroscience research once used only rats, because rats are bigger, so their brains are easier to record from, and smarter, so they can do more interesting stuff. Mice, on the other hand, are smaller and dumber. It can take them weeks to learn the apparently trivial task of choosing which of two levers gives them water most reliably.

But mice have now taken over neuroscience for the simple reason that they are where all the genetics are. Every fancy new tool — the imaging sensor, the light-activated ion channel — is now expressed by rewriting DNA. And so if you want to use it in a mammal, the mouse is basically your only choice.

This has some serious limitations. It’s awkward when you want to use these fancy tools to study how the brain processes visual information, when mice have such god-awful eyes. And awkward if you want to use them to study complex cognition, when mice seem dumber than a bag of rocks.

Except, maybe, we’ve just been asking them the wrong questions.

In a startling paper in eLife, Markus Meister’s team asked mice to navigate a maze that required a long sequence of correct choices, one at each junction, to get from the entrance to the water on the other side. This maze, in fact:

The maze-runner. The mouse starts in its home-cage, on the left; if it wants water, it has to enter the maze on the right and find it. Note the maze is transparent, so the mice could be filmed from below. From Rosenberg et al (2021) eLife

A maze with 64 possible end locations, only one of which had water. Frankly, if this was a hedge maze, you’d lose me in there for days. And given that mice take weeks just to learn which of two choices is correct, how would they do with long sequences of many choices? Better than me.

All the mice found the water. And fast, within half an hour of their first tentative entrance to the maze. Even better, once they had found the water, they rapidly honed their path to it, taking typically just ten successful trips to perfect the run from the entrance to the water, making the correct decision at every junction. Not convinced yet? Well, in the very first long trip into the maze, when they reached one of those 64 end-points, they returned perfectly to the start of the maze. Perfectly. At the first time of asking.

So it looks like much of our behavioural research on mice has been like asking your bathroom scales the time of day — it might seem like a simple question to you, but actually shows you’re a bit of a muppet who hasn’t thought about what scales are designed to do. Perhaps mice weren’t the dumb ones all along.

And 2021 had so much more.

After 2020’s landmark of reconstructing the wires between every neuron in half-a-fly-brain, used in not one but two of the stories above, connectomics shows no signs of abating. This year we’ve had a technology pipeline to scan the whole brain of a rhesus monkey, though as the dataset is 1 petabyte it’s up to some other poor sods to actually work out how to turn that gargantuan data dump into a wiring diagram. And we’ve had connectomics done properly: mapping the wires across different individuals of the same species, and across development, to understand better the inevitable variation in wiring between brains and where it comes from. And, boy, did it vary.

We got to see (most of) the inputs to a single neuron. Rossi, Harris & Carandini spectacularly recorded the ensemble of inputs to single neurons in the mouse’s visual cortex, about 125 at a time, and found that neurons may inherit the angles in the world they prefer from their inputs but not the direction of movement they prefer — so direction preference must be computed by the neurons themselves, by combining their inputs. Just a few weeks later, Scholl and friends painstakingly combined large-scale imaging with the tedious reconstruction of the synapses onto single neurons in the ferret visual cortex to show there are no special connections between neurons that like the same things in the world: a neuron’s output just depends on its total input!

And this year brought us the most spurious of spurious correlations ever. I’d previously joked that if you looked hard enough in the cortex of a mouse you could find a neuron that only fires when it turns left on a Tuesday while wearing a fez. For a bit of fun, Guido Meijer went one better: he showed that the activity of fully thirty-five percent of neurons in a mouse’s visual cortex significantly correlates with the fluctuations of crypto-currency prices at the exact same time. And this was after using strong corrections for doing so many comparisons betwixt neuron and crypto. Nonsense, of course. And that was precisely the point: as Meijer noted, this is a great example of Ken Harris’s “nonsense correlations”, two time-series that each happen to be correlated with themselves on similar time-scales, and so appear to be correlated with each other. As I said, a spurious correlation: just because we see one, and it passes a statistical test, doesn’t mean it’s real. Now, where have I seen correlations between activity and something else on the same time-scales before…
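
To see how easily these nonsense correlations appear, here's a minimal toy sketch: two completely independent time-series, each one slow and drifting (correlated with itself over time), and yet a sizeable correlation routinely shows up between them. The "neuron" and "crypto" here are both just made-up random drifts.

```python
import numpy as np

# Two completely independent time-series, each correlated with itself over time
# (slow and drifting), will often appear strongly correlated with each other.
rng = np.random.default_rng(1)
n = 1000

def slow_series(rng, n, alpha=0.995):
    """An AR(1) series: each value is mostly the previous one, plus a little noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = alpha * x[t - 1] + rng.standard_normal()
    return x

neuron = slow_series(rng, n)   # stand-in for a neuron's slowly drifting activity
crypto = slow_series(rng, n)   # stand-in for a drifting crypto-currency price

# The two series share nothing, yet their correlation is often far from zero,
# and with a thousand "samples" a naive significance test will happily call it real.
r = np.corrcoef(neuron, crypto)[0, 1]
print(f"correlation between two independent slow series: r = {r:.2f}")
```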

Oh, and my book came out this year, a book about how the brain works from the spike’s point of view. Which was nice. Thanks to everyone who bought it, read it, or bought it and read it. And if you liked what you’ve just read, you’ll like the book too.

And with that it’s onwards to 2022, and the inevitable deluge of brain science. See you there!

Twitter: @markdhumphries

For more essays on the brain, follow us at The Spike.


Mark Humphries is a theorist and neuroscientist, writing at the intersection of neurons, data science, and AI, and the author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”.