Robust Cognitive–Flexible Filtering

Structure selection that stays reliable — even when the scores are noisy.

Detect structural mismatch. Switch only when the evidence is clear. Stabilise under noise, with provable guarantees.


Robust CF under Noisy Innovation Scores

Noisy Score → Margin-based Rule → Structure Selection → Belief Update → Stable Estimation

A hysteresis-based switching rule that provably suppresses spurious transitions under bounded score noise.

Cognitive Flexibility (CF) enables online latent-structure selection in Bayesian filtering under structural mismatch. In practice, innovation-based scores are estimated from finite data and inevitably carry noise. This work establishes that CF remains robust and analytically tractable under such perturbations, provided the switching margin exceeds twice the noise bound.


What Is the Problem?

Classical CF selects the latent structure \(s_{t+1}\) that minimises an innovation-based predictive score \(\Phi_t(s)\). Under exact scores, three guarantees hold: descent in expectation, finite switching, and non-chattering.

In practice, however, scores are estimated from a particle filter and carry additive perturbations \(\epsilon_t(s)\) satisfying \(\mathbb{E}[\epsilon_t(s)\mid\mathcal{I}_t]=0\) and \(|\epsilon_t(s)|\leq\bar{\varepsilon}\) a.s. When score differences are of the same order as \(\bar{\varepsilon}\), direct score minimisation induces spurious switching — transitions triggered by noise rather than genuine structural mismatch.
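The failure mode is easy to see numerically. Below is a minimal Python sketch (the paper's released code is Julia; the gap and noise values here are illustrative, not taken from the experiments): when the true score gap is smaller than the noise bound, plain arg-min picks the wrong structure a large fraction of the time.

```python
import random

rng = random.Random(0)

# Illustrative setup: structure 0 is truly better by a gap of 0.2,
# but each estimated score carries Uniform(-0.5, 0.5) noise.
gap, eps_bar, draws = 0.2, 0.5, 10_000

# Count how often direct arg-min ranks the worse structure first.
flips = sum(
    0.0 + rng.uniform(-eps_bar, eps_bar) > gap + rng.uniform(-eps_bar, eps_bar)
    for _ in range(draws)
)
print(flips / draws)  # roughly a third of the draws pick the wrong structure
```

Each such flip is a candidate spurious switch, which is exactly what the margin rule below is designed to suppress.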

⚙ The Margin-Based Solution

We introduce a margin-based switching rule. Let \(\hat{\Phi}_t(s) = \Phi_t(s) + \epsilon_t(s)\) denote the estimated score and \(\hat{s}_t = \arg\min_s \hat{\Phi}_t(s)\) its minimiser. The rule is

\[ s_{t+1} = \begin{cases} \hat{s}_t, & \hat{\Phi}_t(s_t) - \hat{\Phi}_t(\hat{s}_t) > \delta, \\ s_t, & \text{otherwise,} \end{cases} \]

with threshold \(\delta > 2\bar{\varepsilon}\). This single design choice — calibrating the margin against the noise level — is sufficient to restore all three stability properties of the noiseless theory. The design is inspired by hysteresis switching from supervisory control, extended to the stochastic belief-update setting.
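The rule itself is only a few lines. A minimal Python sketch (the released experiment code is Julia; `phi_hat`, a dict of estimated scores per structure, is a hypothetical interface):

```python
def margin_switch(s_t, phi_hat, delta):
    """Margin-based switching: adopt the estimated score minimiser only
    when its advantage over the current structure exceeds delta."""
    s_hat = min(phi_hat, key=phi_hat.get)   # arg-min of the estimated scores
    if phi_hat[s_t] - phi_hat[s_hat] > delta:
        return s_hat                        # clear evidence: switch
    return s_t                              # within the margin: keep s_t

# With noise bound 0.5 and delta = 2.5 * 0.5 = 1.25, a 0.1 advantage is ignored:
print(margin_switch("A", {"A": 1.0, "B": 0.9}, 1.25))  # -> A
print(margin_switch("A", {"A": 3.0, "B": 0.9}, 1.25))  # -> B
```

Note the asymmetry: staying put requires no evidence at all, while switching must clear the margin \(\delta\). This is the hysteresis that suppresses noise-driven transitions.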

📐 Main Theoretical Guarantees

  • Theorem 1 (Descent in Expectation): \(\mathbb{E}[\Phi_t(s_{t+1})\mid\mathcal{I}_t]\leq\Phi_t(s_t)\) — the expected predictive score never increases after a switch.
  • Theorem 2 (Finite Expected Switching): \(\mathbb{E}[N_T]\leq\frac{1}{\delta-2\bar{\varepsilon}} \sum_{t=0}^{T-1}\mathbb{E}[\Phi_t(s_t)-\min_s\Phi_t(s)]\) — total switches scale as \((\delta-2\bar{\varepsilon})^{-1}\).
  • Theorem 3 (Non-Chattering): Once the active structure becomes asymptotically score-optimal, no further switching is triggered a.s. — CF stabilises after finitely many transitions.
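The Theorem 2 bound is straightforward to evaluate. A hedged Python sketch (the function name and inputs are illustrative; `gaps` stands for the per-step expected suboptimality terms \(\mathbb{E}[\Phi_t(s_t)-\min_s\Phi_t(s)]\)):

```python
def expected_switch_bound(delta, eps_bar, gaps):
    """Theorem 2 bound: E[N_T] <= sum(gaps) / (delta - 2*eps_bar),
    valid only under the margin condition delta > 2*eps_bar."""
    slack = delta - 2 * eps_bar
    if slack <= 0:
        raise ValueError("margin must exceed twice the noise bound")
    return sum(gaps) / slack

# Illustrative numbers: constant gap 0.05 over T = 200, eps_bar = 0.5, alpha = 2.5.
print(expected_switch_bound(2.5 * 0.5, 0.5, [0.05] * 200))  # -> 40.0
```

The \((\delta-2\bar{\varepsilon})^{-1}\) factor makes the trade-off concrete: shrinking the margin toward \(2\bar{\varepsilon}\) loosens the bound without limit.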

🤖 Core Insight

Setting \(\delta = \alpha\bar{\varepsilon}\) with \(\alpha\in(2,4]\) provides a practical operating range: \(\alpha\) close to 2 maximises responsiveness to genuine structural change, while larger \(\alpha\) provides greater noise immunity. The experiments use \(\alpha=2.5\).


Numerical Experiments

All experiments use the canonical nonlinear stochastic growth model [Gordon 1993, Arulampalam 2002] with \(N_p=500\) particles and \(M=100\) Monte Carlo runs over horizon \(T=200\). Score perturbations \(\epsilon_t(s)\sim\mathrm{Uniform}(-\bar{\varepsilon},\bar{\varepsilon})\) are injected independently per structure and time step.
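The qualitative effect can be reproduced in miniature with the same noise-injection protocol. A Python sketch under simplifying assumptions (the actual experiments use a particle filter and the released Julia scripts; here the true scores are fixed constants and the two-structure setup is illustrative):

```python
import random

def count_switches(T, eps_bar, delta, gap, seed=0):
    """Switches over horizon T between two structures whose true scores
    differ by `gap`, with Uniform(-eps_bar, eps_bar) noise injected
    independently per structure and time step."""
    rng = random.Random(seed)
    true_phi = [0.0, gap]       # structure 0 is the true optimum
    s, switches = 0, 0
    for _ in range(T):
        noisy = [p + rng.uniform(-eps_bar, eps_bar) for p in true_phi]
        s_hat = noisy.index(min(noisy))
        if s_hat != s and noisy[s] - noisy[s_hat] > delta:
            s, switches = s_hat, switches + 1
    return switches

eps_bar = 1.5
print(count_switches(200, eps_bar, delta=0.0, gap=0.5))            # chatters
print(count_switches(200, eps_bar, delta=2.5 * eps_bar, gap=0.5))  # -> 0
```

With \(\delta = 2.5\bar{\varepsilon} = 3.75\), the noisy advantage of the wrong structure can never exceed \(2\bar{\varepsilon} - \text{gap} = 2.5 < \delta\), so no spurious switch is possible; with \(\delta = 0\) the selector chatters.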

Results Summary (Table I)

Expected switch count \(\mathbb{E}[N_T]\) by noise level:

Method | \(\bar{\varepsilon}=0.5\) | \(\bar{\varepsilon}=1.5\) | \(\bar{\varepsilon}=3.0\)
Exact CF (oracle) | 0.3 | 0.3 | 0.3
CF without margin (\(\delta=0\)) | 83.7 | 81.1 | 79.2
Robust CF (proposed) | 0.3 | 1.3 | 7.9
Thm. 2 bound | 8.2 | 11.4 | 17.6

Lower \(\mathbb{E}[N_T]\) and \(\bar{\Phi}_T\) indicate better performance. The empirical switch count of Robust CF stays well below the Theorem 2 bound at every noise level. ✓


Impact and Applications

The margin-based CF mechanism provides a principled solution to robust structure selection in Bayesian filtering. Its implications span:

  • Adaptive state estimation under sensor noise and model uncertainty
  • Particle filter-based systems where score approximation errors are unavoidable
  • Deep state-space models where learned scores carry approximation errors
  • Supervisory control with noisy performance metrics

Reproducibility

All Julia code for the numerical experiments is provided in two scripts:

  • lcss_experiments.jl — Figures 2 & 3, Table I (Theorems 1–3)
  • fig3_scaling.jl — Figure 4, scaling validation (Theorem 2)

Publication

Robust Cognitive–Flexible Filtering under Noisy Innovation Scores