FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks
Published:
This post is based on the paper FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks.
Overview
Summary:
A review of FedRecovery, which proposes an efficient approach to machine unlearning in federated learning without retraining.
🔑 Research Question:
- Can we efficiently obtain an unlearned model that performs similarly to one retrained from scratch, without actually retraining?
⚙️ Key Mechanism:
- Removes client contributions via weighted gradient residual subtraction.
- Adds carefully calibrated Gaussian noise to ensure indistinguishability from retrained models.
- Does not rely on convexity assumptions or retraining-based calibration.
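The mechanism above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact formulation: the function name, the per-round weights, and the single noise scale `sigma` are simplifying assumptions.

```python
import numpy as np

def fedrecovery_unlearn(global_model, residuals_i, weights, sigma, rng=None):
    """Sketch of FedRecovery-style unlearning (simplified, hypothetical API).

    global_model : final aggregated model parameters (1-D array)
    residuals_i  : the target client's stored per-round gradient residuals
    weights      : per-round weights applied to each residual (assumed scalars)
    sigma        : std of the Gaussian noise calibrated for indistinguishability
    """
    rng = np.random.default_rng() if rng is None else rng
    # Remove the client's contribution by subtracting its weighted residuals.
    removed = global_model - sum(w * r for w, r in zip(weights, residuals_i))
    # Add calibrated Gaussian noise so the unlearned model is statistically
    # indistinguishable from a model retrained without that client.
    return removed + rng.normal(0.0, sigma, size=global_model.shape)
```

Note that no convexity assumption appears anywhere: the update is a purely arithmetic correction on stored residuals, which is why it avoids retraining-based calibration.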
📊 Main Results:
- Achieves statistical indistinguishability between unlearned and retrained models.
- Maintains comparable accuracy to retraining-based methods.
- Significantly reduces computational cost.
⚠️ Limitations / Open Questions:
- Trade-off between noise calibration and model utility.
- Limited validation on large-scale deep models.
❓ Data Privacy Problem:
- Paper Assumption: The server must identify which client’s updates to remove.
- Works under local DP (updates are noisy but still attributable to a client)
- Breaks under homomorphic encryption (the server cannot tell individual updates apart)
- Naive Idea:
- The requesting client re-submits its past updates multiplied by -1, encrypted under the same scheme
- Homomorphic addition on the server then cancels its contribution without revealing the plaintext gradients
- Suggests a possible direction for client-assisted unlearning under encryption
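The cancellation idea rests on additive homomorphism: Enc(u) · Enc(-u) decrypts to 0, so the client's share drops out of the encrypted aggregate. A toy Paillier instance (tiny primes, illustration only, not in the paper and not secure) makes this concrete; the update values are made-up scalars standing in for gradient entries:

```python
from math import gcd
import random

# Toy Paillier cryptosystem with tiny primes -- illustration only, NOT secure.
P, Q = 11, 13
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # lcm(p-1, q-1)
G = N + 1

def L(x):
    return (x - 1) // N

MU = pow(L(pow(G, LAM, N2)), -1, N)            # decryption constant

def enc(m):
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m % N, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (L(pow(c, LAM, N2)) * MU) % N

# Server holds the encrypted sum of all updates; the requesting client
# submits Enc(-u), and ciphertext multiplication adds plaintexts.
u_client, u_others = 9, 30
agg = (enc(u_client) * enc(u_others)) % N2     # homomorphic aggregation
agg = (agg * enc(-u_client)) % N2              # client-assisted cancellation
assert dec(agg) == u_others                    # contribution removed
```

The sketch also exposes the open problems the naive idea leaves unsolved: the server must trust that the client negates exactly its past updates, and any post-aggregation noise or clipping applied per round would not be undone by this cancellation.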
💡 Insight:
FedRecovery reveals a fundamental tension between privacy and deletability:
while stronger protection (e.g., encryption) hides individual contributions, it also makes precise removal difficult.
Slides:
PDF (Korean) Download