Structural interpretability in SVMs with truncated orthogonal polynomial kernels

Researchers introduce ORCA (Orthogonal Representation Contribution Analysis), a post-training interpretability framework for Support Vector Machines with truncated orthogonal polynomial kernels. The method expands the learned decision function in explicit RKHS coordinates and quantifies classifier complexity across interaction orders and per-feature contributions, without retraining or surrogate models.
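The paper's exact construction isn't reproduced here, but the general idea it describes (build an explicit truncated orthogonal-polynomial feature map, fit a linear decision function in those coordinates, then group the weights by interaction order) can be sketched in a few lines. Everything below is an illustrative assumption of ours, not ORCA's implementation: the Chebyshev basis, the toy subgradient SVM solver, and all function names are hypothetical stand-ins.

```python
import numpy as np

def cheb_features(X, degree):
    """Explicit feature map for a truncated tensor-product Chebyshev
    kernel: one column per basis term T_i(x1) * T_j(x2) with
    i + j <= degree. X is (n, 2) with entries in [-1, 1]."""
    n = X.shape[0]
    T = [np.ones((n, 2)), X.copy()]          # T_0, T_1
    for _ in range(2, degree + 1):           # three-term recurrence
        T.append(2.0 * X * T[-1] - T[-2])    # T_k = 2x T_{k-1} - T_{k-2}
    cols, idx = [], []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            cols.append(T[i][:, 0] * T[j][:, 1])
            idx.append((i, j))
    return np.column_stack(cols), idx

def linear_svm(Phi, y, lam=1e-2, lr=0.1, epochs=200):
    """Toy batch-subgradient solver for an L2-regularized linear SVM,
    standing in for a kernel SVM trained in the explicit coordinates."""
    w, b = np.zeros(Phi.shape[1]), 0.0
    for _ in range(epochs):
        active = y * (Phi @ w + b) < 1.0     # points inside the margin
        w -= lr * (lam * w - (Phi[active] * y[active, None]).sum(0) / len(y))
        b -= lr * (-y[active].sum() / len(y))
    return w, b

def order_contributions(w, idx):
    """Squared weight mass per interaction order (total degree i + j)."""
    out = {}
    for wk, (i, j) in zip(w, idx):
        out[i + j] = out.get(i + j, 0.0) + wk ** 2
    return out

# XOR-style toy problem: the label depends only on the x1*x2 interaction.
g = np.array([-0.8, -0.4, 0.4, 0.8])
X = np.array([(a, b) for a in g for b in g])
y = np.sign(X[:, 0] * X[:, 1])

Phi, idx = cheb_features(X, degree=3)
w, b = linear_svm(Phi, y)
contrib = order_contributions(w, idx)
dominant = max(contrib, key=contrib.get)
```

On this toy data the decomposition behaves as one would hope: essentially all weight mass lands on interaction order 2 (the T_1(x1)T_1(x2) = x1*x2 term), which is the kind of order-by-order attribution the summary describes, read directly off the classifier's coordinates rather than from a surrogate model.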
Mentions: Support Vector Machines · Orthogonal Representation Contribution Analysis · Reproducing Kernel Hilbert Space
Read the full story at arXiv cs.LG → (arxiv.org)
Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.