Cognitive Flexibility as a Latent Structural Operator

When the model itself is wrong, adapt the representation, not just the parameters. Detect structural mismatch. Select the most predictively consistent representation. Preserve Bayesian well-posedness, always.

Cognitive Flexibility for Bayesian State Estimation

Belief → Innovation Score → CF Rule → Structure Selection → Bayesian Update

CF acts solely at the representation level; the Bayesian filtering recursion remains structurally unchanged. Cognitive Flexibility (CF) is a belief-level mechanism for online latent-structure selection in Bayesian filtering under structural mismatch. Rather than adapting parameters within a fixed model class, CF selects at each step the latent structure that minimises an innovation-based predictive score, leaving the underlying Bayesian recursion intact.

What Is Structural Mismatch?

In Bayesian filtering, the posterior belief \(\mathfrak{B}_t\) evolves under a parameterised model class. When the true dynamics lie outside this class, parameter adaptation alone cannot restore predictive consistency: the belief remains well-posed but becomes systematically misaligned with the true data-generating process. This is structural mismatch, an intrinsic failure mode that cannot be eliminated by tuning \(\theta\).

⚙ The CF Pipeline

Fig. 1. The CF pipeline as a latent structural operator. Dashed regions correspond to the three analytical layers: well-posedness (Layer 1), mechanism (Layer 2), and consequences (Layer 3).

At each time step, the innovation scores \(\{\Phi(\mathfrak{B}_t,s)\}_{s\in\mathcal{S}}\) are evaluated against the current belief \(\mathfrak{B}_t\) and passed to the CF rule, which selects \(s_{t+1}\) and parameterises the Bayesian update \(\mathfrak{B}_{t+1}=\mathcal{F}_{\theta,s_{t+1}}(\mathfrak{B}_t,u_t,y_{t+1})\). The belief update remains fully Bayesian; CF acts only through the structural update.

📐 Three-Layer Theoretical Analysis
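The per-step CF loop described above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: the structure set (two scalar drift coefficients), the noise levels, and the use of the negative predictive log-likelihood as the innovation-based score \(\Phi(\mathfrak{B}_t,s)\) are all assumptions made for the sketch. The key point it shows is that CF only chooses *which* structure parameterises the update; the Kalman (Bayesian) update itself is unchanged.

```python
import math

# Hypothetical structure set S: scalar linear-Gaussian models x' = a*x + w.
STRUCTURES = {"slow": 0.5, "fast": 0.99}  # illustrative drift coefficients
Q, R = 0.1, 0.2                           # assumed process / measurement noise variances

def predict(mean, var, a):
    """One-step prediction of the belief under structure a."""
    return a * mean, a * a * var + Q

def innovation_score(mean, var, a, y):
    """Negative predictive log-likelihood of y under structure a.

    A standard innovation-based score, standing in for the paper's
    abstract Phi(B_t, s); this exact choice is an assumption.
    """
    m, P = predict(mean, var, a)
    S = P + R           # innovation covariance
    nu = y - m          # innovation
    return 0.5 * (math.log(2 * math.pi * S) + nu * nu / S)

def cf_step(mean, var, y):
    """CF rule + unchanged Bayesian update."""
    # CF rule: select the structure minimising the innovation score ...
    s = min(STRUCTURES, key=lambda k: innovation_score(mean, var, STRUCTURES[k], y))
    # ... then run the ordinary Kalman update under the selected structure.
    m, P = predict(mean, var, STRUCTURES[s])
    K = P / (P + R)
    return s, m + K * (y - m), (1 - K) * P
```

For a prior belief \((0, 1)\) and an observation \(y = 1\), the score favours the "fast" structure here, since the slow model's prediction is pulled further from the data; the posterior then follows from the standard Kalman gain.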
🤖 Core Insight

CF enlarges the set of admissible belief trajectories from \(\mathcal{R}_s\) (fixed structure) to \(\mathcal{R}_{\mathrm{CF}}=\bigcup_{s\in\mathcal{S}}\mathcal{R}_s\), strictly expanding representational capacity beyond any fixed model class, and beyond what IMM filtering achieves through probabilistic mixing.

Impact and Applications

CF establishes a principled belief-level framework for representation adaptation in Bayesian state estimation. Its implications span:
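The reachable-set expansion in the core insight can be made tangible with a toy example. The two deterministic belief-mean maps below are hypothetical stand-ins for structures in \(\mathcal{S}\); because CF may commit to a different structure at each step (rather than mixing them probabilistically, as IMM does), it reaches trajectories that no single fixed structure can produce.

```python
# Two hypothetical structures, modelled as deterministic belief-mean maps.
maps = {"s1": lambda m: 0.5 * m,   # contractive structure
        "s2": lambda m: m + 1.0}   # drifting structure

def trajectory(schedule, m0=0.0):
    """Belief-mean trajectory under a per-step structure schedule."""
    out = [m0]
    for s in schedule:
        out.append(maps[s](out[-1]))
    return out

fixed_1 = trajectory(["s1", "s1", "s1"])   # reachable under fixed s1: R_s1
fixed_2 = trajectory(["s2", "s2", "s2"])   # reachable under fixed s2: R_s2
switched = trajectory(["s2", "s1", "s2"])  # CF trajectory: in the union, in neither fixed set
```

The switched trajectory \([0,\,1,\,0.5,\,1.5]\) lies in \(\mathcal{R}_{\mathrm{CF}}\) but in neither \(\mathcal{R}_{s_1}\) (identically zero here) nor \(\mathcal{R}_{s_2}\) (linear drift), illustrating the strict inclusion.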
Reproducibility

All Julia code for the numerical experiments is available at thanana.github.io.
Publication