Pedagogical Deep Dive

Synapses as Springs: Linear Operators, Green–Laplace Tools, and the Origin of E–I Oscillations

A pedagogical tour of the linear differential operators that power every neural mass model—from leaky integrators to the Barkhausen conditions that make excitatory–inhibitory loops oscillate.

From Appendix J of Rosetta Stone of Neural Mass Models  |  arXiv:2512.10982
Giulio Ruffini & Francesca Castaldo
BCOM & Neuroelectrics
10 min read

Why Linear Operators Deserve Their Own Story

Linear differential operators appear everywhere in neural mass modeling—synaptic kinetics, population filters, dendritic cable reductions—yet they are often introduced as technical machinery and quickly passed over. In Appendix J of our Rosetta Stone paper, we decided to give them the pedagogical treatment they deserve. The reward is a surprisingly intuitive picture: every synapse in a neural mass model behaves like a mass-spring-damper system, and the entire question of "when does an E–I loop oscillate?" reduces to a classical feedback condition from control theory.

This post walks through the key ideas, building from the simplest first-order filter all the way to the Barkhausen conditions for self-oscillation in Wilson–Cowan and Jansen–Rit circuits.

The Linear Operator: Two Complementary Views

Every linear synaptic filter in a neural mass model can be written as a differential operator $L$ acting on an output variable:

The General Linear Operator
$$L[x](t) = u(t), \qquad L = \sum_{k=0}^{n} a_k\,\partial_t^k$$
$L$ maps an output signal $x(t)$ to an input drive $u(t)$. The coefficients $a_k$ encode the filter's gain, time constants, and inertia. Acting with $L$ inverts the convolution with the impulse response—it states the rule the filter obeys, rather than listing its response to a kick.

There are two canonical objects we can extract from $L$: the homogeneous solution $L[x_h] = 0$ (the system's natural modes of ringing), and the impulse response $h(t)$ that solves $L[h] = \delta$ (how the system responds to a kick). These lead to two complementary analysis frameworks:

Laplace View (Frequency Domain)

Transform to $s$-space: $L$ becomes a polynomial $P(s) = \sum a_k s^k$. The transfer function $H(s) = 1/P(s)$ immediately reveals poles (decay rates, oscillation frequencies) and the frequency response $H(j\omega)$ gives magnitude and phase at any frequency.

Green's Function View (Time Domain)

The causal Green's function $G(t, t_0)$ satisfies $L[G] = \delta(t - t_0)$. For any input $u$, the output is a convolution: $x(t) = \int G(t, t_0)\,u(t_0)\,dt_0$. This shows how past inputs are weighted and delayed—the system's "memory."

Both views describe exactly the same mathematics. Laplace uses exponentials $e^{st}$ to expose poles and phase; Green uses localized impulses $\delta(t-t_0)$ to show how past inputs are weighted. We use whichever is more convenient.
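The equivalence of the two views can be checked in a few lines. The sketch below (plain Python; the values of $a$ and $b$ are arbitrary) confirms that the pole of $H(s) = 1/(as+b)$ sits exactly at the decay rate of the causal Green's function, and that convolving the Green's function with a unit step reproduces the analytic step response $x(t) = (1 - e^{-t/\tau})/b$:

```python
import math

# Two views of the first-order operator L = a*d/dt + b, i.e. P(s) = a s + b.
a, b = 1.0, 2.0
tau = a / b

# Laplace view: H(s) = 1/P(s) has a single pole at s = -b/a.
pole = -b / a

# Green's-function view: G(t) = (1/a) exp(-(b/a) t) for t >= 0, else 0.
def G(t):
    return math.exp(-(b / a) * t) / a if t >= 0 else 0.0

# The pole's real part is the Green's function's decay rate, and
# convolving G with a unit step gives the analytic step response.
dt, T = 1e-4, 5 * tau
x_conv = sum(G(k * dt) * dt for k in range(int(T / dt)))   # x(T) = int_0^T G
x_exact = (1.0 - math.exp(-T / tau)) / b
print(pole, x_conv, x_exact)
```

The Riemann-sum convolution and the closed-form step response agree to the discretization error, which is the whole content of $x = G * u$.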

Climbing the Order Ladder

Let us build intuition by increasing the order of $L$ one step at a time, seeing how each new ingredient changes the synaptic filter's behavior.

Zeroth Order: Memoryless Gain

Order 0 — Instantaneous
$$b\,y = f \quad \Longrightarrow \quad h(t) = \tfrac{1}{b}\,\delta(t)$$
No temporal memory at all. Output is an instantaneous rescaling of input. This is the limit of infinitely fast synapses.

First Order: The Leaky Integrator

This is the workhorse of Wilson–Cowan models: a single time constant $\tau = a/b$ that smooths and delays the input.

Order 1 — Leaky Integrator
$$a\,\dot{y} + b\,y = f \qquad \Longrightarrow \qquad h(t) = \tfrac{1}{a}\,e^{-(b/a)\,t}\,H(t)$$
In Laplace: $H(s) = \frac{1}{b}\cdot\frac{1}{1 + s\tau}$ with $\tau = a/b$. One real pole at $s = -b/a$. The filter is a low-pass with time constant $\tau$ and group delay $\tau_g(\omega) = \tau/(1 + (\omega\tau)^2)$. If $b = 0$, we get a pure integrator with perfect memory: $H(s) = 1/(as)$.
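A minimal numerical check of the low-pass behavior and the group-delay formula (plain Python; the values of $a$ and $b$ are arbitrary):

```python
import cmath

a, b = 1.0, 2.0           # L = a d/dt + b
tau = a / b

def H(w):
    # H(jw) = 1/(a jw + b) = (1/b) / (1 + jw tau)
    return 1.0 / (a * 1j * w + b)

dc_gain = abs(H(0.0))                   # 1/b
corner_gain = abs(H(1.0 / tau))         # dc_gain / sqrt(2): -3 dB at w = 1/tau

# group delay tau_g(w) = -d(arg H)/dw, compared against tau/(1 + (w tau)^2)
w0, dw = 1.0 / tau, 1e-6
tau_g_num = -(cmath.phase(H(w0 + dw)) - cmath.phase(H(w0 - dw))) / (2 * dw)
tau_g_formula = tau / (1 + (w0 * tau) ** 2)
print(dc_gain, corner_gain, tau_g_num, tau_g_formula)
```

The numerically differentiated phase matches $\tau_g(\omega) = \tau/(1 + (\omega\tau)^2)$, and the gain at $\omega = 1/\tau$ is down by $\sqrt{2}$ from DC, as expected for a one-pole low-pass.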

Second Order: The Mass-Spring-Damper (The Alpha Kernel)

This is where things get really interesting—and where the physical analogy becomes exact. A second-order operator is equivalent to a damped harmonic oscillator driven by a force:

Order 2 — The Synapse as a Forced Harmonic Oscillator
$$m\,\ddot{y} + a\,\dot{y} + b\,y = f(t)$$
This is literally Newton's second law for a mass $m$ on a spring (stiffness $b$) with damping $a$, driven by a force $f(t)$. In neural mass models, $y$ is a post-synaptic potential, $f$ is the presynaptic drive, and $m$ summarizes effective inertial storage across coupled first-order elements.

The character of the impulse response depends on the roots of the characteristic polynomial $m s^2 + as + b = 0$:

Characteristic Roots
$$r_{1,2} = \frac{-a \pm \sqrt{a^2 - 4mb}}{2m}$$
Three regimes: Overdamped ($a^2 > 4mb$): two real poles, no oscillation. Critical ($a^2 = 4mb$): repeated real pole, the famous alpha kernel $h(t) = (1/m)\,t\,e^{-\frac{a}{2m}t}$. Underdamped ($a^2 < 4mb$): complex poles, damped sinusoidal ringing.
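The three regimes follow directly from the sign of the discriminant $a^2 - 4mb$; a small classifier sketch (plain Python, illustrative parameter values):

```python
import cmath

def regimes(m, a, b):
    """Classify m y'' + a y' + b y = f and return its characteristic roots."""
    disc = a * a - 4 * m * b
    r1 = (-a + cmath.sqrt(disc)) / (2 * m)
    r2 = (-a - cmath.sqrt(disc)) / (2 * m)
    if disc > 0:
        kind = "overdamped"        # two distinct real poles, no oscillation
    elif disc == 0:
        kind = "critical"          # repeated pole: the alpha kernel
    else:
        kind = "underdamped"       # complex pair: damped ringing
    return kind, r1, r2

print(regimes(1.0, 3.0, 1.0)[0])   # a^2 = 9 > 4mb = 4
print(regimes(1.0, 2.0, 1.0)[0])   # a^2 = 4 = 4mb: repeated pole at -1
print(regimes(1.0, 1.0, 1.0)[0])   # a^2 = 1 < 4mb = 4
```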

The critically damped case is particularly important: it produces the alpha kernel used in the Jansen–Rit model. The impulse response rises, peaks, and decays—producing the characteristic post-synaptic potential waveform. The peak time is controlled by the mass parameter $m$:

Alpha Kernel (Critically Damped)
$$h(t) = A\,a\,t\,e^{-at}\,H(t), \qquad (\partial_t + a)^2 y(t) = A\,a\,\sigma(t)$$
The alpha kernel is the impulse response of a critically damped oscillator driven by the presynaptic firing rate $\sigma(t)$. Its delayed peak is a causal, buffer-like delay—not a noncausal time shift. This is the synaptic operator at the heart of the Jansen–Rit model.
Key Physical Insight

A synapse modeled by a second-order operator is literally a mass-spring-damper: the presynaptic drive is the force, the post-synaptic potential is the displacement, the "mass" $m$ creates inertia that produces a delayed peak, and the damping $a$ controls how quickly the response decays. Increasing the mass increases the causal delay: $t_{\text{peak}} \sim \frac{\pi}{2}\sqrt{m/b}$.
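The $t_{\text{peak}} \sim \frac{\pi}{2}\sqrt{m/b}$ scaling can be checked directly on the lightly damped impulse response $h(t) = e^{-\alpha t}\sin(\omega_d t)/(m\omega_d)$, with $\alpha = a/2m$ and $\omega_d = \sqrt{b/m - \alpha^2}$ (plain Python; the parameter values are arbitrary, with $a$ small so the light-damping estimate applies):

```python
import math

m, a, b = 4.0, 0.1, 1.0                 # light damping: a^2 << 4 m b
alpha = a / (2 * m)
wd = math.sqrt(b / m - alpha ** 2)

def h(t):
    # underdamped impulse response of m y'' + a y' + b y = delta(t)
    return math.exp(-alpha * t) * math.sin(wd * t) / (m * wd)

# locate the first (and global) peak by dense sampling
t_peak = max((k * 1e-4 for k in range(1, 100000)), key=h)
t_est = (math.pi / 2) * math.sqrt(m / b)    # the scaling quoted above
print(t_peak, t_est)
```

For these values the sampled peak lands within a couple of percent of the $\frac{\pi}{2}\sqrt{m/b}$ estimate, confirming that more synaptic "mass" means a later peak.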

E–I Motifs and the Barkhausen Conditions

With the filter toolkit in hand, we can now ask the central question: when does a loop of excitatory and inhibitory populations oscillate? The answer comes from classical feedback theory.

The Simplest E–I Loop: Two Coupled First-Order Filters

We start by rewriting the harmonic oscillator—damped or not—as a pair of coupled leaky integrators. Let $z = x + iy$ and consider:

Push–Pull as Coupled Leaky Integrators
$$\dot{x} = -a\,x - \omega y, \qquad \dot{y} = -a\,y + \omega x$$
Each variable is a leaky integrator driven by the other with a 90° phase shift; in complex form, $\dot{z} = (-a + i\omega)\,z$. When $a = 0$ the loop produces sustained oscillations; when $a > 0$ the envelope decays as $e^{-at}$. This is the simplest push–pull template.
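A quick forward-Euler simulation of the push–pull loop, writing the leak with an explicit minus sign so that $a > 0$ damps the envelope (plain Python; $\omega$ and the step size are arbitrary):

```python
import math

def envelope(a, w=2.0, t_end=3.0, dt=1e-4):
    # integrate x' = -a x - w y, y' = -a y + w x from (x, y) = (1, 0)
    x, y = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        x, y = x + (-a * x - w * y) * dt, y + (-a * y + w * x) * dt
    return math.hypot(x, y)

print(envelope(0.0))        # ~1: sustained oscillation
print(envelope(0.5))        # ~exp(-1.5): decaying envelope
```

The simulated envelope $\sqrt{x^2 + y^2}$ stays at 1 for $a = 0$ and tracks $e^{-at}$ for $a > 0$, up to the Euler discretization error.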

Barkhausen: The Phase-Gain Budget for Self-Oscillation

For a general feedback loop with transfer function $L(s)$, the necessary conditions for linear self-oscillation at frequency $\omega_0$ are the Barkhausen conditions:

Barkhausen Conditions
$$|L(j\omega_0)| = 1, \qquad \arg L(j\omega_0) = 0^\circ \;\text{(mod } 360^\circ\text{)}$$
The phase condition pins $\omega_0$ by balancing the phase lags of all elements in the loop. The magnitude condition pins the product of all gains, including the slope $\kappa$ of the nonlinearity at the bias point. Nonlinear saturation then stabilizes the amplitude.

This is the universal recipe: oscillation requires enough gain around the loop (magnitude condition) and enough phase accumulation to complete a full 360° cycle (phase condition). Let's see how this plays out in the two main neural mass architectures.

Wilson–Cowan with First-Order Synapses

Linearizing Wilson–Cowan around a fixed point with sigmoid slope $\kappa$ and unit time constants:

Wilson–Cowan Linearized E–I Loop
$$\dot{x} = -(1 - w_{ee}\kappa)\,x - w_{ei}\kappa\,y + I_x, \qquad \dot{y} = -y + w_{ie}\kappa\,x - w_{ii}\kappa\,y$$
The E→I→E loop transfer function is $T_{EI}(s) = \frac{w_{ei}\,w_{ie}\,\kappa^2}{(s+1)^2}$, which can supply up to $-180°$ of phase on its own. That's not enough to close the loop at a finite $\omega$. Self-excitation $T_{EE}(s) = \frac{w_{ee}\kappa}{s+1}$ provides the extra phase needed to satisfy Barkhausen. A bias $I_x > 0$ (ensuring $\kappa > 0$) is essential.

The key insight: two first-order elements alone cannot supply 360° of phase. Self-excitation, an explicit transmission delay, or a higher-order synaptic filter is needed to close the phase budget.
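This phase bottleneck is easy to see numerically. For the bare E→I→E loop $T(j\omega) = -g/(1+j\omega)^2$ (the inhibitory sign flip contributes $180°$), the phase is $180° - 2\arctan\omega$ and approaches $0°$ (mod $360°$) only as $\omega \to \infty$, where the gain has already collapsed. A sketch in plain Python with an arbitrary gain $g$:

```python
import cmath, math

g = 10.0
def T(w):
    # E->I->E with two first-order filters and one inhibitory sign flip
    return -g / (1 + 1j * w) ** 2

# phase = 180 - 2*atan(w) degrees: the residual never reaches 0 at finite w,
# while |T| = g / (1 + w^2) shrinks toward zero
for w in (1.0, 10.0, 100.0):
    print(w, math.degrees(cmath.phase(T(w))), abs(T(w)))
```

At $\omega = 100$ the phase residual is about $1°$, but the loop gain has fallen to $\sim 10^{-3}$: the two Barkhausen conditions cannot be met simultaneously without extra phase from somewhere else.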

Jansen–Rit with Second-Order Synapses

The Jansen–Rit model upgrades to second-order synaptic filters, which are band-pass rather than low-pass:

Jansen–Rit Transfer Functions
$$H_{\text{exc}}(s) = \frac{Aa\kappa}{(s+a)^2}, \qquad H_{\text{inh}}(s) = \frac{Bb\kappa}{(s+b)^2}$$
The E→I→E loop transfer is $T(s) = -w^2\,H_{\text{exc}}(s)\,H_{\text{inh}}(s)$. Each second-order filter contributes up to $-180°$ of phase around its passband, and the inhibitory sign flip supplies another $180°$, so the loop closes the full $360°$ phase budget at a finite frequency where the gain is still nonzero. Self-excitation is not required to satisfy Barkhausen—the E–I loop alone can oscillate.
Jansen–Rit Loop Transfer
$$T(s) = -w^2\,H_{\text{exc}}(s)\,H_{\text{inh}}(s) = \frac{-w^2 A a\kappa \cdot B b\kappa}{(s+a)^2(s+b)^2}$$
The Barkhausen phase condition $\arg T(j\omega_0) = 0°$ (mod $360°$) sets the oscillation frequency $\omega_0$, which is controlled by the synaptic poles $a$ and $b$ and any axonal conduction delay; the magnitude condition $|T(j\omega_0)| = 1$ then fixes the loop gain required for sustained oscillation.
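For this double-pole cascade the phase condition has a closed form: $\arg T(j\omega) = 180° - 2\arctan(\omega/a) - 2\arctan(\omega/b)$ vanishes when $\arctan(\omega/a) + \arctan(\omega/b) = 90°$, i.e. at $\omega_0 = \sqrt{ab}$. A numerical check (plain Python; $K$ lumps together $w^2 A a \kappa\, B b \kappa$, and the rate constants are purely illustrative):

```python
import cmath, math

a, b = 100.0, 50.0        # illustrative synaptic rate constants (1/s)

def T(w, K):
    # loop transfer T(jw) = -K / ((jw + a)^2 (jw + b)^2)
    return -K / ((1j * w + a) ** 2 * (1j * w + b) ** 2)

w0 = math.sqrt(a * b)                                    # phase condition
K_osc = abs((1j * w0 + a) ** 2 * (1j * w0 + b) ** 2)     # then |T| = 1
print(math.degrees(cmath.phase(T(w0, K_osc))), abs(T(w0, K_osc)))
```

With these illustrative rates, $\omega_0 = \sqrt{ab} \approx 70.7$ rad/s, about 11 Hz: the synaptic poles alone pin the rhythm, and the gain $K_{\text{osc}}$ is whatever makes the loop magnitude exactly one there.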

Information Flow and Effective Loop Delay

For narrowband loops near resonance, each element's phase behaves approximately as $\varphi_k(\omega) \approx -\omega\,\tau_k$, where $\tau_k$ is the group delay. The Barkhausen phase condition $\sum_k \varphi_k(\omega_0) = -360°\,n$ then implies:

Effective Loop Delay
$$\omega_0 \approx \frac{2\pi n}{\tau_{\text{loop}}}, \qquad \tau_{\text{loop}} = -\sum_k \frac{\varphi_k(\omega_0)}{\omega_0} \approx \sum_k \tau_k$$
The oscillation frequency is set by the total effective delay around the loop. Second-order synapses provide larger group delay around their passband than first-order ones. This is one reason gamma-range E–I oscillations typically require at least second-order synaptic dynamics.

This result connects the abstract Barkhausen conditions to a very concrete physical picture: the oscillation frequency is determined by the total signal travel time around the E–I loop, which includes both axonal conduction delays and the effective delays introduced by synaptic filtering.
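The group-delay claim is easy to verify numerically: differentiating the phase shows that a second-order synapse with rate constant $a$ carries exactly twice the group delay of a first-order one at the corner frequency (plain Python; the value of $a$ is arbitrary):

```python
import cmath

a = 100.0
H1 = lambda w: 1.0 / (1j * w + a)        # first-order synapse
H2 = lambda w: 1.0 / (1j * w + a) ** 2   # second-order (alpha-kernel) synapse

def group_delay(H, w, dw=1e-3):
    # tau_g(w) = -d(arg H)/dw, central difference
    return -(cmath.phase(H(w + dw)) - cmath.phase(H(w - dw))) / (2 * dw)

tg1, tg2 = group_delay(H1, a), group_delay(H2, a)
print(tg1, tg2)      # at the corner: (1/a)/2, and twice that
```

Doubling the filter order doubles the effective delay this stage contributes to $\tau_{\text{loop}}$, which is how upgrading synapses from first to second order lowers (or enables) the loop's oscillation frequency.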

Quick Reference: Synaptic Operator Taxonomy

Order | Operator | Impulse Response | Neural Mass Use
0th | $by = f$ | $\frac{1}{b}\delta(t)$ (instantaneous) | Static gain; memoryless synapses
1st | $a\dot{y} + by = f$ | $\frac{1}{a}e^{-(b/a)t}$ (exponential decay) | Wilson–Cowan synapses
2nd (critical) | $m\ddot{y} + a\dot{y} + by = f$ | $\frac{1}{m}\,t\,e^{-\frac{a}{2m}t}$ (alpha kernel) | Jansen–Rit / NMM1
2nd (underdamped) | $m\ddot{y} + a\dot{y} + by = f$ | $\frac{1}{m\omega_d}e^{-\alpha t}\sin(\omega_d t)$ (ringing) | Resonant synapses; gamma filters

What This Means in Practice

Model Selection Made Physical

If your neural mass model only needs low-pass filtering (slow dynamics, no oscillatory synaptic ringing), first-order operators suffice—this is the Wilson–Cowan regime. If you need realistic PSP shapes, causal delays from synaptic inertia, or alpha–gamma interactions, second-order operators are essential—this is the Jansen–Rit/NMM1 regime. The choice of operator order is a choice about how much temporal structure your synapses carry.

The Phase Budget Guides Oscillation Frequency

Oscillation frequencies in neural mass models are not free parameters—they are determined by the Barkhausen phase condition applied to the full E–I loop. Changing synaptic time constants, adding axonal delays, or upgrading from first- to second-order filters all change the phase budget and thus shift the oscillation frequency. This gives a principled, quantitative handle on how pharmacological or neuromodulatory interventions alter brain rhythms.

Synaptic Mass = Causal Delay

The "mass" parameter $m$ in the second-order operator creates genuine causal delay in the synaptic response—not a time shift, but inertial buildup followed by decay. Increasing $m$ pushes $t_{\text{peak}} \sim \frac{\pi}{2}\sqrt{m/b}$ later, which increases the effective loop delay and lowers the oscillation frequency. This is why synaptic time constants and dendritic filtering directly control the spectral properties of neural oscillations.

The Bottom Line

Every synapse in a neural mass model is a linear filter—and every linear filter is, at heart, a mass-spring-damper. The Laplace and Green's function views give complementary insight: poles and transfer functions for frequency-domain analysis, impulse responses and convolutions for time-domain intuition. The question "when does an E–I loop oscillate?" has a precise answer: when the Barkhausen conditions are met, meaning the total gain around the loop equals unity and the total phase accumulates to 360°. This classical feedback picture unifies Wilson–Cowan (first-order) and Jansen–Rit (second-order) oscillations within a single framework.

Castaldo, F., de Palma Aristides, R., Clusella, P., Garcia-Ojalvo, J., & Ruffini, G. (2025). Rosetta Stone of Neural Mass Models — Appendix J: Linear Operators, Green–Laplace Tools, and E–I Oscillations. arXiv:2512.10982. https://arxiv.org/abs/2512.10982