Federated Unlearning: Concept & Challenges
This post is based on the talk Learning and Unlearning Your Data in Federated Settings (PEPR ‘24, USENIX).
Overview
Summary:
An overview of federated unlearning and its key challenges in balancing privacy, efficiency, and model utility.
🔑 Research Question:
- Can federated learning systems support safe and efficient data deletion without full retraining?
⚙️ Conceptual Approaches:
- Passive Unlearning:
  - Server-only: the server removes a participant's influence using stored model updates, without client involvement
  - Client-aided: clients assist by providing gradients or update history
- Active Unlearning:
  - Server and clients collaboratively remove the influence of the target data
- Levels of Unlearning:
  - Record-level, class-level, or client-level
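To make the server-only passive setting concrete, here is a minimal sketch (not from the talk) of client-level unlearning: the server keeps per-round client updates and, to forget a client, replays the logged rounds while excluding that client's contributions. The function names (`fedavg_round`, `server_only_unlearn`) are illustrative; in real federated learning each update depends on the then-current global model, so this replay is only an approximation of retraining without the client.

```python
import numpy as np

def fedavg_round(global_model, client_updates):
    # Standard FedAvg step: average the clients' updates and apply them.
    return global_model + np.mean(list(client_updates.values()), axis=0)

def server_only_unlearn(initial_model, update_history, forget_client):
    # Passive, server-only unlearning sketch: replay stored per-round
    # updates while dropping the forgotten client's contributions.
    # NOTE: an approximation -- logged updates were computed against the
    # original training trajectory, not this replayed one.
    model = initial_model.copy()
    for round_updates in update_history:
        remaining = {c: u for c, u in round_updates.items() if c != forget_client}
        if remaining:
            model = fedavg_round(model, remaining)
    return model

# Toy example: 2 rounds, 3 clients, a 4-parameter model.
rng = np.random.default_rng(0)
init = np.zeros(4)
history = [{c: rng.normal(size=4) for c in ("A", "B", "C")} for _ in range(2)]

model = init.copy()
for round_updates in history:        # normal federated training
    model = fedavg_round(model, round_updates)

unlearned = server_only_unlearn(init, history, forget_client="B")
```

The same replay idea underlies several proposed federated unlearning schemes; the cost is that the server must store the full update history, which itself raises the storage and privacy concerns discussed below.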
📊 Key Insights:
- Exact retraining from scratch is reliable but computationally expensive
- Approximate unlearning improves efficiency but weakens deletion guarantees
- Privacy, consistency, and efficiency must be balanced
- Lack of formal verification remains a core challenge
⚠️ Limitations & Open Challenges:
- Verifiability: proving that unlearning actually occurred
- Dynamic participation: handling clients joining/leaving
- Fairness and explainability remain underexplored
- New privacy risks may arise during unlearning
💡 Insight:
Federated unlearning introduces a fundamental tension between data deletion guarantees and system efficiency, suggesting that future work must integrate both cryptographic guarantees and system-level design.
Slides:
PDF (Korean) Download