Modelwire

Structural interpretability in SVMs with truncated orthogonal polynomial kernels

Researchers introduce ORCA (Orthogonal Representation Contribution Analysis), a post-training interpretability framework for Support Vector Machines with truncated orthogonal polynomial kernels. The method expands decision functions in explicit RKHS coordinates and quantifies classifier complexity across interaction orders and feature contributions, without retraining or surrogate models.

Mentions: Support Vector Machines · Orthogonal Representation Contribution Analysis · Reproducing Kernel Hilbert Space
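To make the idea concrete, here is a minimal sketch of the general technique the summary describes: building an explicit feature map for a truncated orthogonal polynomial kernel (Legendre polynomials here), writing the decision function in those RKHS coordinates, and attributing its squared norm to interaction orders. The truncation scheme, variable names, and toy dual coefficients are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of RKHS-coordinate decomposition for a truncated orthogonal
# polynomial kernel. Assumption: tensor-product Legendre features with
# total degree <= `degree` stand in for the paper's kernel expansion.
import itertools
import numpy as np
from numpy.polynomial import legendre

def phi(x, degree):
    """Map x in [-1, 1]^p to tensor-product Legendre features of
    total degree <= degree. Returns (feature vector, multi-indices)."""
    feats, idxs = [], []
    for multi in itertools.product(range(degree + 1), repeat=len(x)):
        if sum(multi) > degree:
            continue
        val = 1.0
        for xj, dj in zip(x, multi):
            c = np.zeros(dj + 1)
            c[dj] = 1.0  # coefficient vector selecting P_dj
            val *= legendre.legval(xj, c)
        feats.append(val)
        idxs.append(multi)
    return np.array(feats), idxs

def order_contributions(w, idxs):
    """Share of ||w||^2 carried by each interaction order, where the
    order of a feature is the number of inputs with nonzero degree."""
    total = float(np.dot(w, w))
    shares = {}
    for wi, multi in zip(w, idxs):
        order = sum(d > 0 for d in multi)
        shares[order] = shares.get(order, 0.0) + wi * wi / total
    return shares

# Toy "trained" SVM: dual weights alpha_i * y_i on three support
# vectors (hypothetical values, for illustration only).
X_sv = np.array([[0.2, -0.5], [-0.7, 0.1], [0.4, 0.9]])
coef = np.array([1.0, -0.8, 0.5])
idxs = phi(X_sv[0], degree=2)[1]
# Primal weights in explicit RKHS coordinates: w = sum_i coef_i * phi(x_i),
# so f(x) = <w, phi(x)> equals the kernel-form decision function.
w = sum(c * phi(x, degree=2)[0] for c, x in zip(coef, X_sv))
shares = order_contributions(w, idxs)
print({k: round(v, 3) for k, v in sorted(shares.items())})
```

The per-order shares sum to one, so they can be read as a complexity profile: how much of the classifier lives in main effects versus pairwise (and higher) interactions.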

Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.

Related

Stability and Generalization in Looped Transformers

arXiv cs.LG

Learning to Think Like a Cartoon Captionist: Incongruity-Resolution Supervision for Multimodal Humor Understanding

arXiv cs.CL

Making AI operational in constrained public sector environments
