FedIDM: Achieving Fast and Stable Convergence in Byzantine Federated Learning through Iterative Distribution Matching

Researchers propose FedIDM, a Byzantine-robust federated learning method that uses distribution matching to identify malicious clients and stabilize convergence. The approach combines attack-tolerant data generation with contribution-based filtering to maintain model utility while handling colluding adversaries.
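The summary does not detail FedIDM's algorithm, but the general idea of filtering client updates against a robust reference and weighting survivors by contribution can be sketched as follows. Everything here is a hypothetical illustration: the coordinate-wise median reference, the cosine-similarity score, the `threshold` value, and the function names are all assumptions, not the paper's method.

```python
import random
import statistics

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def robust_aggregate(updates, threshold=0.5):
    """Illustrative robust aggregation: score each client update against
    a robust reference, drop low-scoring (suspected Byzantine) updates,
    and average the rest weighted by their scores."""
    # Coordinate-wise median stands in for the "matched distribution".
    ref = [statistics.median(col) for col in zip(*updates)]
    scores = [cosine(u, ref) for u in updates]
    kept = [(u, s) for u, s in zip(updates, scores) if s >= threshold]
    total = sum(s for _, s in kept)
    dim = len(updates[0])
    return [sum(u[i] * s for u, s in kept) / total for i in range(dim)]

# Toy round: four honest clients near [1, 1, 1] and one colluding adversary.
random.seed(0)
honest = [[1 + random.gauss(0, 0.05) for _ in range(3)] for _ in range(4)]
byzantine = [[-10.0, -10.0, -10.0]]
agg = robust_aggregate(honest + byzantine)
```

In this toy round the adversary's update points away from the reference, scores below the threshold, and is excluded, so the aggregate stays close to the honest clients' direction. Actual distribution matching as used in FedIDM would involve the paper's own statistics rather than this simple cosine filter.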
Read full story at arXiv cs.LG → (arxiv.org)
Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.