Talks

Paper Deep Dive – FedRecovery

August 20, 2025

Summary:
For the lab meeting, I prepared a review of FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks. Machine unlearning aims to make models “forget” specific client data upon deletion requests. Retraining from scratch is often computationally infeasible in federated settings, so FedRecovery instead erases a client’s influence from the global model by subtracting a weighted sum of that client’s gradient residuals and adding differential privacy noise, without assuming convexity of the loss.
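
To make the mechanism concrete, here is a minimal numerical sketch of a FedRecovery-style removal step, assuming the server has stored the target client’s per-round gradient residuals. The function name, the weighting scheme, and the Gaussian noise scale are illustrative placeholders, not the paper’s exact formulation.

```python
# Illustrative sketch only: names and weighting are placeholders.
import numpy as np

def fedrecovery_unlearn(w_global, residuals_target, weights, sigma, rng=None):
    """Remove one client's influence from the final global model.

    w_global         : final global model parameters (flat vector)
    residuals_target : the target client's stored gradient residuals, one per round
    weights          : per-round weighting coefficients
    sigma            : std of Gaussian differential-privacy noise
    """
    rng = np.random.default_rng() if rng is None else rng
    # Weighted sum of the stored gradient residuals of the client to be forgotten.
    correction = sum(a * r for a, r in zip(weights, residuals_target))
    # Subtract the client's accumulated influence, then add DP noise.
    return w_global - correction + rng.normal(0.0, sigma, size=w_global.shape)

# Toy usage with random placeholders.
d, T = 10, 5
w = np.random.randn(d)
residuals = [np.random.randn(d) * 0.01 for _ in range(T)]
alphas = np.ones(T) / T
w_unlearned = fedrecovery_unlearn(w, residuals, alphas, sigma=0.01)
```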

Federated Unlearning: Concept & Challenges

August 13, 2025

Summary:
In preparation for a lab meeting, I studied the concept of Federated Unlearning, which extends the idea of “machine unlearning” to federated learning environments. While federated learning protects raw data by keeping it on clients, deletion requests under regulations such as the “Right to be Forgotten” raise a crucial question: how can we safely remove the influence of specific data or clients from a trained federated model? This summary is based on the PEPR ’24 talk Learning and Unlearning Your Data in Federated Settings (USENIX).
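
As a point of reference, the sketch below shows the naive baseline that federated unlearning seeks to avoid: retraining FedAvg from scratch while simply excluding the forgotten client. The toy least-squares objective and function names are illustrative only; the point is that every round must be repeated.

```python
# Minimal sketch of the naive baseline: retrain from scratch without the forgotten client.
import numpy as np

def local_update(w, data, lr=0.1):
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)   # gradient of a toy least-squares loss
    return w - lr * grad

def fedavg_retrain(clients, forget_id, rounds=50, dim=3):
    w = np.zeros(dim)
    kept = [c for i, c in enumerate(clients) if i != forget_id]
    for _ in range(rounds):
        # Every round is recomputed using only the remaining clients.
        w = np.mean([local_update(w, c) for c in kept], axis=0)
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w_without_client2 = fedavg_retrain(clients, forget_id=2)
```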

Paper Deep Dive – MaskCRYPT

August 06, 2025

Summary:
In this lab meeting, I reviewed MaskCRYPT: Federated Learning With Selective Homomorphic Encryption. While federated learning protects data from direct leakage, exposing model weights can still lead to serious privacy risks such as membership inference attacks. MaskCRYPT addresses this challenge by selectively encrypting only a small fraction of model updates, striking a balance between security and efficiency under homomorphic encryption.
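
The sketch below illustrates the selective-encryption idea, assuming for illustration that the most sensitive entries of an update are chosen by gradient magnitude; he_encrypt is a stand-in for a real homomorphic encryption call (e.g., CKKS), not an actual library API, and the 5% ratio is an arbitrary example.

```python
# Illustrative sketch: selectively encrypt only the most sensitive part of an update.
import numpy as np

def he_encrypt(values):
    # Placeholder: a real client would encrypt these values under an HE scheme.
    return [("ciphertext", v) for v in values]

def selective_encrypt(update, grads, ratio=0.05):
    k = max(1, int(ratio * update.size))
    # Mask = indices of the k entries with the largest gradient magnitude,
    # i.e. the weights judged most revealing about the local data.
    mask = np.argsort(np.abs(grads))[-k:]
    encrypted_part = he_encrypt(update[mask])
    plaintext_part = np.delete(update, mask)
    return mask, encrypted_part, plaintext_part

update = np.random.randn(1000)
grads = np.random.randn(1000)
mask, enc, plain = selective_encrypt(update, grads, ratio=0.05)
print(len(enc), plain.size)  # ~5% of the update encrypted, the rest sent in plaintext
```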

Paper Deep Dive – HETAL

July 16, 2025

Summary:
In this lab meeting, I reviewed HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption. Transfer learning is widely used for data-scarce problems: a pre-trained model is fine-tuned on a small downstream dataset. While previous studies focused mainly on encrypted inference, HETAL provides a practical scheme for encrypted training, fine-tuning a classifier on encrypted features under homomorphic encryption.
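
Below is a plaintext analogue of that workflow, with illustrative names: features are extracted by a frozen pre-trained backbone (here just random numbers) and only a softmax classifier head is trained on them. In HETAL, the corresponding matrix multiplications and an approximated softmax are carried out on encrypted features; this sketch keeps everything in the clear to show the structure of the computation.

```python
# Plaintext analogue of encrypted classifier-head fine-tuning; names are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_classifier_head(features, labels, num_classes, lr=0.5, epochs=100):
    n, d = features.shape
    W = np.zeros((d, num_classes))
    Y = np.eye(num_classes)[labels]            # one-hot labels
    for _ in range(epochs):
        P = softmax(features @ W)              # done on encrypted features in HETAL
        W -= lr * features.T @ (P - Y) / n     # gradient step on the classifier only
    return W

# Toy usage with random "features" standing in for a backbone's output.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = rng.integers(0, 3, size=200)
W = train_classifier_head(features, labels, num_classes=3)
```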