The Recording of the Masses
When trying to read brain signals, especially those recorded from human brains, neuroscientists face a great challenge: the signals picked up by surface probes or imaging methods typically do not originate from individual cells. Instead, they reflect the simultaneous activity of large neuronal populations comprising thousands, perhaps even millions of nerve cells. But if the aim is to interact with the brain and its dynamic activity, for example through neurotechnological devices used in deep brain stimulation, it is essential to understand what happens at the level of neurons and neuronal networks.
Mathematical approaches to unravel how single-cell activities are linked to population signals have a long history in science: the use of so-called mean-field models goes back more than forty years, to 1972, when Hugh R. Wilson and Jack D. Cowan (who recently celebrated his 80th birthday with a scientific symposium in his honor) developed the first model of this kind. However, this first model and many that followed lacked a crucial feature, which sometimes made it hard to use them to interpret biological data directly: these models assumed that infinitely many neurons contribute to the population in question. As a consequence, fluctuations would average out. Obviously, the networks that one deals with in the brain may be large, but they are not of infinite size, and fluctuations need to be taken into account. What makes the problem particularly difficult is that the fluctuations appear to depend on the state of the network, assuming larger amplitudes when the overall activity of the network is higher.
In a research article just published in “Frontiers in Computational Neuroscience”, Fereshteh Lagzi and Stefan Rotter from the Cluster of Excellence BrainLinks-BrainTools and the Bernstein Center at the University of Freiburg present a new approach to modeling the time-dependent dynamics of populations of finite size. To solve the problem, they used a specific type of Markov process in which each neuron is assumed to be in one of two states: either it is ready to be activated, or it is refractory because it has just fired an action potential. Whether a neuron switches from one state to the other depends only on the activity of the other neurons in the surrounding network. It turned out that this caricature of the complex neuronal interactions in large networks is actually quite telling: some very specific statistical fingerprints of the fluctuations in computer-simulated networks of finite size are shared with the reduced Markov model. “However, this method can be equally confronted with biological data”, Lagzi and Rotter point out. This new method is therefore another step toward better understanding the activity of the populations of nerve cells that underlie brain function, and, as a consequence, toward extending our ability to interact with the brain in the treatment and therapy of neurological disorders.
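To illustrate the flavor of such a model, the following Python sketch simulates a finite population of two-state neurons in discrete time. It is only a minimal caricature in the spirit of the article, not the authors' published formulation: the sigmoid coupling function and all parameter values (N, r, g, h) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a finite-size, two-state Markov population model
# (illustrative only, not the authors' exact formulation): each of N
# neurons is either "ready" (0) or "refractory" (1). A ready neuron
# fires -- and becomes refractory -- with a probability that depends on
# the current network activity; a refractory neuron recovers at a fixed
# rate. The coupling gain g, baseline h, and recovery rate r are
# assumed values chosen for demonstration.

rng = np.random.default_rng(0)

N = 1000          # network size (finite!)
T = 2000          # number of time steps
r = 0.1           # recovery probability per step (refractory -> ready)
g, h = 8.0, -4.0  # gain and baseline of the assumed activation function

state = np.zeros(N, dtype=int)   # 0 = ready, 1 = refractory
activity = np.zeros(T)           # fraction of neurons firing per step

for t in range(T):
    a = activity[t - 1] if t > 0 else 0.05        # previous population activity
    p_fire = 1.0 / (1.0 + np.exp(-(g * a + h)))   # activation depends on the others

    ready = state == 0
    refractory = ~ready

    fires = ready & (rng.random(N) < p_fire)      # ready neurons may fire
    recovers = refractory & (rng.random(N) < r)   # refractory neurons may recover

    state[fires] = 1
    state[recovers] = 0
    activity[t] = fires.mean()

# With finite N, the activity fluctuates around its mean, and the size of
# the fluctuations depends on the activity level itself.
print(f"mean activity: {activity.mean():.3f}, std: {activity.std():.3f}")
```

Running the sketch shows population activity that fluctuates around its mean rather than settling to a fixed value, which is precisely the finite-size effect that an infinite-population mean-field description would average away.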
Original publication
Lagzi F and Rotter S (2014) A Markov model for the temporal dynamics of balanced random networks of finite size. Front. Comput. Neurosci. 8:142. doi: 10.3389/fncom.2014.00142
Image caption
A good match: The image shows the dynamic flow extracted from a numerical simulation of a biologically realistic spiking neuronal network (left) and its counterpart derived from the scientists’ new population model (right). Note the excellent agreement between the vector fields and the so-called nullclines in the simulation and in the model.