The motivation
Why Linear Operators Deserve Their Own Story
Linear differential operators appear everywhere in neural mass modeling—synaptic kinetics, population filters, dendritic cable reductions—yet they are often introduced as technical machinery and quickly passed over. In Appendix J of our Rosetta Stone paper, we decided to give them the pedagogical treatment they deserve. The reward is a surprisingly intuitive picture: every synapse in a neural mass model behaves like a mass-spring-damper system, and the entire question of "when does an E–I loop oscillate?" reduces to a classical feedback condition from control theory.
This post walks through the key ideas, building from the simplest first-order filter all the way to the Barkhausen conditions for self-oscillation in Wilson–Cowan and Jansen–Rit circuits.
The framework
The Linear Operator: Two Complementary Views
Every linear synaptic filter in a neural mass model can be written as a differential operator $L$ acting on an output variable:
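$$L[x](t) \;=\; \sum_{k=0}^{n} a_k\,\frac{d^k x}{dt^k} \;=\; u(t),$$

where $u(t)$ is the presynaptic drive and the coefficients $a_k$ set the filter's time constants (this is the standard general form; the low-order special cases are collected in the taxonomy table further down).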
There are two canonical objects we can extract from $L$: the homogeneous solution $L[x_h] = 0$ (the system's natural modes of ringing), and the impulse response $h(t)$ that solves $L[h] = \delta$ (how the system responds to a kick). These lead to two complementary analysis frameworks:
Laplace View (Frequency Domain)
Transform to $s$-space: $L$ becomes a polynomial $P(s) = \sum_k a_k s^k$. The transfer function $H(s) = 1/P(s)$ immediately reveals the poles (decay rates, oscillation frequencies), and the frequency response $H(j\omega)$ gives the magnitude and phase at any driving frequency.
Green's Function View (Time Domain)
The causal Green's function $G(t, t_0)$ satisfies $L[G] = \delta(t - t_0)$. For any input $u$, the output is a convolution: $x(t) = \int G(t, t_0)\,u(t_0)\,dt_0$. This shows how past inputs are weighted and delayed—the system's "memory."
Both views describe exactly the same mathematics. Laplace uses exponentials $e^{st}$ to expose poles and phase; Green uses localized impulses $\delta(t-t_0)$ to show how past inputs are weighted. We use whichever is more convenient.
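To make the equivalence concrete, here is a minimal numerical sketch (the first-order parameters and test input are illustrative choices, not values from the paper): simulating the transfer function directly and convolving the input with the impulse response give the same output.

```python
import numpy as np
from scipy import signal

# Illustrative first-order filter a*y' + b*y = u, i.e. H(s) = 1/(a*s + b).
a, b = 1.0, 2.0
sys = signal.lti([1.0], [a, b])

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
u = np.sin(2.0 * np.pi * 0.5 * t)            # arbitrary test input

# Laplace view: simulate the LTI system directly.
_, y_laplace, _ = signal.lsim(sys, U=u, T=t)

# Green's function view: convolve the input with the impulse response.
_, h = signal.impulse(sys, T=t)              # causal Green's function G(t, 0)
y_green = np.convolve(u, h)[: len(t)] * dt   # x(t) = integral of G(t - t0) u(t0) dt0

print(np.max(np.abs(y_laplace - y_green)))   # small: the two views agree up to discretization error
```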
A gentle taxonomy
Climbing the Order Ladder
Let us build intuition by increasing the order of $L$ one step at a time, seeing how each new ingredient changes the synaptic filter's behavior.
Zeroth Order: Memoryless Gain
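The degenerate case has no derivatives at all: $b\,y = f$. The output is an instantaneously rescaled copy of the input, $y = f/b$, the impulse response is $\frac{1}{b}\,\delta(t)$, and there is no memory and no phase lag.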
First Order: The Leaky Integrator
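Adding a single derivative gives

$$a\,\dot{y} + b\,y = f(t), \qquad h(t) = \frac{1}{a}\,e^{-(b/a)\,t} \quad (t \ge 0),$$

an exponentially decaying impulse response.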
This is the workhorse of Wilson–Cowan models: a single time constant $\tau = a/b$ that smooths and delays the input.
Second Order: The Mass-Spring-Damper (The Alpha Kernel)
This is where things get really interesting—and where the physical analogy becomes exact. A second-order operator is equivalent to a damped harmonic oscillator driven by a force:
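$$m\,\ddot{y} + a\,\dot{y} + b\,y = f(t),$$

with $m$ playing the role of mass (inertia), $a$ of damping (friction), $b$ of stiffness (the restoring spring), and $f$ the presynaptic driving force.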
The character of the impulse response depends on the roots of the characteristic polynomial $m s^2 + as + b = 0$:
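- Overdamped ($a^2 > 4mb$): two distinct real roots; the impulse response is a sum of two decaying exponentials, with no overshoot and no ringing.
- Critically damped ($a^2 = 4mb$): a repeated real root at $-\frac{a}{2m}$; the impulse response is the alpha kernel $\frac{1}{m}\,t\,e^{-\frac{a}{2m}t}$.
- Underdamped ($a^2 < 4mb$): complex-conjugate roots $-\alpha \pm j\omega_d$ with $\alpha = \frac{a}{2m}$ and $\omega_d = \sqrt{\frac{b}{m} - \alpha^2}$; the impulse response rings, $\frac{1}{m\omega_d}\,e^{-\alpha t}\sin(\omega_d t)$.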
The critically damped case is particularly important: it produces the alpha kernel used in the Jansen–Rit model. The impulse response rises, peaks, and decays—producing the characteristic post-synaptic potential waveform. The peak time is controlled by the mass parameter $m$:
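$$t_{\text{peak}} = \frac{2m}{a} \;=\; \sqrt{\frac{m}{b}} \quad \text{(using the critical-damping relation } a = 2\sqrt{mb}\text{)},$$

since the maximum of $t\,e^{-\frac{a}{2m}t}$ occurs where its derivative vanishes, at $t = \frac{2m}{a}$. This is the same $\sqrt{m/b}$ scaling as the estimate quoted just below.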
A synapse modeled by a second-order operator is literally a mass-spring-damper: the presynaptic drive is the force, the post-synaptic potential is the displacement, the "mass" $m$ creates inertia that produces a delayed peak, and the damping $a$ controls how quickly the response decays. Increasing the mass increases the causal delay: $t_{\text{peak}} \sim \frac{\pi}{2}\sqrt{m/b}$.
The origin of oscillations
E–I Motifs and the Barkhausen Conditions
With the filter toolkit in hand, we can now ask the central question: when does a loop of excitatory and inhibitory populations oscillate? The answer comes from classical feedback theory.
The Simplest E–I Loop: Two Coupled First-Order Filters
We start by rewriting the undamped harmonic oscillator as a pair of coupled leaky integrators. Let $z = x + iy$ and consider:
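$$\dot{z} = (i\omega_0 - \lambda)\,z \quad\Longleftrightarrow\quad \dot{x} = -\lambda x - \omega_0\, y, \quad \dot{y} = -\lambda y + \omega_0\, x$$

(the symbols $\omega_0$ and $\lambda$ are illustrative here, one standard way to write the pair). These are two leaky integrators cross-coupled with opposite signs, the skeleton of an E–I pair; for $\lambda \to 0$ they reduce to the undamped harmonic oscillator $\ddot{x} + \omega_0^2 x = 0$, so oscillation emerges purely from the antisymmetric coupling.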
Barkhausen: The Phase-Gain Budget for Self-Oscillation
For a general feedback loop with transfer function $L(s)$, the necessary conditions for linear self-oscillation at frequency $\omega_0$ are the Barkhausen conditions:
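$$|L(j\omega_0)| = 1 \quad \text{(magnitude condition)}, \qquad \angle L(j\omega_0) \equiv 0^\circ \pmod{360^\circ} \quad \text{(phase condition)}.$$

Equivalently, the total phase accumulated around the loop must add up to a full $360^\circ$ at a frequency where the loop gain is unity.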
This is the universal recipe: oscillation requires enough gain around the loop (magnitude condition) and enough phase accumulation to complete a full 360° cycle (phase condition). Let's see how this plays out in the two main neural mass architectures.
Wilson–Cowan with First-Order Synapses
Linearizing Wilson–Cowan around a fixed point with sigmoid slope $\kappa$ and unit time constants:
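$$\dot{e} = -e - \kappa\, w_{ei}\, i, \qquad \dot{i} = -i + \kappa\, w_{ie}\, e,$$

with $e$, $i$ the deviations from the fixed point (the weights $w_{ei}$, $w_{ie}$ and the omission of self-coupling are assumptions of this sketch, not notation fixed by the excerpt). Keeping only the E→I→E pathway, the loop transfer function is

$$L(s) = -\,\frac{\kappa^2\, w_{ei}\, w_{ie}}{(1+s)^2}.$$

Each first-order factor $\frac{1}{1+s}$ lags by at most $90^\circ$, and the inhibitory sign flip supplies $180^\circ$.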
The key insight: two first-order elements alone cannot supply 360° of phase. Self-excitation, an explicit transmission delay, or a higher-order synaptic filter is needed to close the phase budget.
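A quick numerical check of this phase budget, using the loop sketched above (the gain value $g$ is illustrative, not a value from the paper):

```python
import numpy as np

# Sketched Wilson-Cowan loop transfer function L(s) = -g / (1 + s)**2,
# with g = kappa**2 * w_ei * w_ie > 0 (illustrative value only).
g = 4.0
omega = np.logspace(-2, 3, 2000)          # frequency sweep (rad/s, unit time constants)

# Accumulated phase lag around the loop: 180 deg from the inhibitory sign flip
# plus up to 90 deg from each first-order factor 1/(1 + j*omega).
lag_deg = 180.0 + 2.0 * np.degrees(np.arctan(omega))
gain = g / (1.0 + omega**2)               # |L(j*omega)|

print(lag_deg.max())                      # approaches but never reaches 360 deg
print(gain[np.argmax(lag_deg)])           # and the loop gain there is already ~0
```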
Jansen–Rit with Second-Order Synapses
The Jansen–Rit model upgrades to second-order synaptic filters, which are band-pass rather than low-pass:
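$$H(s) = \frac{1}{m s^2 + a s + b},$$

written here in the operator notation used above (the paper's own parameterization may differ). Each such second-order stage can contribute up to $180^\circ$ of phase lag on its own, so two stages plus the inhibitory sign flip can close the $360^\circ$ phase budget at a finite frequency where the loop gain is still appreciable.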
Phase and delay
Information Flow and Effective Loop Delay
For narrowband loops near resonance, each element's phase behaves approximately as $\varphi_k(\omega) \approx -\omega\,\tau_k$, where $\tau_k$ is the group delay. The Barkhausen phase condition $\sum_k \varphi_k(\omega_0) \equiv 0^\circ \pmod{360^\circ}$ then implies:
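$$\omega_0 \sum_k \tau_k = 2\pi n \quad (n = 1, 2, \dots), \qquad\text{i.e.}\qquad f_0 \approx \frac{1}{\sum_k \tau_k} \;\;\text{for the fundamental mode,}$$

so the oscillation period is, to this approximation, the total group delay accumulated around the loop.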
This result connects the abstract Barkhausen conditions to a very concrete physical picture: the oscillation frequency is determined by the total signal travel time around the E–I loop, which includes both axonal conduction delays and the effective delays introduced by synaptic filtering.
Quick Reference: Synaptic Operator Taxonomy
| Order | Operator | Impulse Response | Neural Mass Use |
|---|---|---|---|
| 0th | $by = f$ | $\frac{1}{b}\delta(t)$ (instantaneous) | Static gain; memoryless synapses |
| 1st | $a\dot{y} + by = f$ | $\frac{1}{a}e^{-(b/a)t}$ (exponential decay) | Wilson–Cowan synapses |
| 2nd (critical) | $m\ddot{y} + a\dot{y} + by = f$ | $\frac{1}{m}t\,e^{-\frac{a}{2m}t}$ (alpha kernel) | Jansen–Rit / NMM1 |
| 2nd (underdamped) | Same operator | $\frac{1}{m\omega_d}e^{-\alpha t}\sin(\omega_d t)$ with $\alpha = \frac{a}{2m}$, $\omega_d = \sqrt{\frac{b}{m} - \alpha^2}$ (ringing) | Resonant synapses; gamma filters |
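A small numerical cross-check of the table's kernels (a sketch with illustrative parameter values, using scipy's standard LTI tools; none of the numbers come from the paper):

```python
import numpy as np
from scipy import signal

# Illustrative parameters, chosen so that a**2 == 4*m*b (critical damping).
m, a, b = 1.0, 2.0, 1.0
t = np.linspace(0.0, 10.0, 1001)

# First order: a*y' + b*y = f  ->  H(s) = 1/(a*s + b), exponential decay.
_, h1 = signal.impulse(signal.lti([1.0], [a, b]), T=t)

# Second order (critical): m*y'' + a*y' + b*y = f  ->  H(s) = 1/(m*s**2 + a*s + b).
_, h2 = signal.impulse(signal.lti([1.0], [m, a, b]), T=t)

# Analytic alpha kernel from the table: (1/m) * t * exp(-a*t / (2*m)).
h2_alpha = (t / m) * np.exp(-a * t / (2.0 * m))

print(np.max(np.abs(h2 - h2_alpha)))        # ~0: matches the table's alpha kernel
print(t[np.argmax(h2_alpha)], 2.0 * m / a)  # peak sits near t = 2*m/a
```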
Why it matters
What This Means in Practice
Model Selection Made Physical
If your neural mass model only needs low-pass filtering (slow dynamics, no oscillatory synaptic ringing), first-order operators suffice—this is the Wilson–Cowan regime. If you need realistic PSP shapes, causal delays from synaptic inertia, or alpha–gamma interactions, second-order operators are essential—this is the Jansen–Rit/NMM1 regime. The choice of operator order is a choice about how much temporal structure your synapses carry.
The Phase Budget Guides Oscillation Frequency
Oscillation frequencies in neural mass models are not free parameters—they are determined by the Barkhausen phase condition applied to the full E–I loop. Changing synaptic time constants, adding axonal delays, or upgrading from first- to second-order filters all change the phase budget and thus shift the oscillation frequency. This gives a principled, quantitative handle on how pharmacological or neuromodulatory interventions alter brain rhythms.
Synaptic Mass = Causal Delay
The "mass" parameter $m$ in the second-order operator creates genuine causal delay in the synaptic response—not a time shift, but inertial buildup followed by decay. Increasing $m$ pushes $t_{\text{peak}} \sim \frac{\pi}{2}\sqrt{m/b}$ later, which increases the effective loop delay and lowers the oscillation frequency. This is why synaptic time constants and dendritic filtering directly control the spectral properties of neural oscillations.
Every synapse in a neural mass model is a linear filter—and every linear filter is, at heart, a mass-spring-damper. The Laplace and Green's function views give complementary insight: poles and transfer functions for frequency-domain analysis, impulse responses and convolutions for time-domain intuition. The question "when does an E–I loop oscillate?" has a precise answer: when the Barkhausen conditions are met, meaning the total gain around the loop equals unity and the total phase accumulates to 360°. This classical feedback picture unifies Wilson–Cowan (first-order) and Jansen–Rit (second-order) oscillations within a single framework.
Reference
Castaldo, F., de Palma Aristides, R., Clusella, P., Garcia-Ojalvo, J., & Ruffini, G. (2025). Rosetta Stone of Neural Mass Models — Appendix J: Linear Operators, Green–Laplace Tools, and E–I Oscillations. arXiv:2512.10982. https://arxiv.org/abs/2512.10982