The Pawgrammer Begins
Published:

The beginning of my journey as The Pawgrammer 🐾
Published:
This post is based on the paper HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption.
Published:
This post is based on the paper MaskCRYPT: Federated Learning With Selective Homomorphic Encryption.
Published:
This post is based on the talk Learning and Unlearning Your Data in Federated Settings (PEPR ‘24, USENIX).
Published:
This post is based on the paper FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks.
Published:
This post is based on the paper CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference.
Published in Diabetes Research and Clinical Practice, 2021
We proposed and validated a standardized infusion protocol using real-world clinical data from a hospital data warehouse. This work highlights how large-scale clinical datasets can support practical decision-making in surgery.
Recommended citation: Tae-jung Oh, Ji-hyung Kook, Se Young Jung, Duck-Woo Kim, Sung Hee Choi, Hong Bin Kim, Hak Chul Jang (2021). "A standardized glucose-insulin-potassium infusion protocol in surgical patients: Use of real clinical data from a clinical data warehouse." Diabetes Research and Clinical Practice, 174:108756.
Download Paper | Download BibTeX
Published:
Summary:
In this lab meeting, I reviewed HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption. Transfer learning tackles data-scarce problems by fine-tuning pre-trained models. While previous studies focused mainly on encrypted inference, HETAL is the first practical scheme to enable encrypted training under homomorphic encryption.
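To make the setting concrete, here is a minimal plain-NumPy sketch of the training structure HETAL operates on: a frozen backbone extracts features, and only a softmax classifier head is fine-tuned. The encryption itself is omitted; in HETAL the features and every arithmetic step below would run under CKKS, and all sizes and hyperparameters here are made-up toy values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for features extracted by a frozen pre-trained backbone.
# In HETAL these features and all training arithmetic would be encrypted;
# the plaintext math below only shows what gets computed.
n, d, k = 64, 8, 3                      # samples, feature dim, classes (toy)
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
Y = np.eye(k)[y]                        # one-hot labels

W = np.zeros((d, k))                    # only the classifier head is trained
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                    # full-batch gradient descent
    P = softmax(X @ W)
    W -= lr * X.T @ (P - Y) / n         # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

Because only the small head is trained, the encrypted workload stays tractable; this is the reason transfer learning pairs well with homomorphic encryption in the first place.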
🔑 Research Question:
⚙️ Key Mechanism:
📊 Main Results:
⚠️ Limitations:
Slides:
PDF (Korean) Download
Published:
Summary:
In this lab meeting, I reviewed MaskCRYPT: Federated Learning With Selective Homomorphic Encryption. While federated learning protects data from direct leakage, exposing model weights can still lead to serious privacy risks such as membership inference attacks. MaskCRYPT addresses this challenge by selectively encrypting only a small fraction of model updates, striking a balance between security and efficiency under homomorphic encryption.
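The selective-encryption idea can be sketched in a few lines. Note the caveats: the sensitivity score below is simple weight magnitude and the 10% ratio is an arbitrary choice for illustration; the paper derives its encryption mask from gradient-guided sensitivity, which is simplified away here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model update from one client (flattened weights).
update = rng.normal(size=100)

# MaskCRYPT-style idea: encrypt only the most privacy-sensitive fraction
# of the update, send the rest in plaintext. Magnitude is a stand-in
# sensitivity score; the ratio is an assumed toy value.
encrypt_ratio = 0.1
k = int(len(update) * encrypt_ratio)
sensitive_idx = np.argsort(-np.abs(update))[:k]

mask = np.zeros(len(update), dtype=bool)
mask[sensitive_idx] = True

encrypted_part = update[mask]    # would be HE-encrypted (e.g. CKKS)
plaintext_part = update[~mask]   # transmitted in the clear
```

Since homomorphic operations dominate the cost of encrypted federated learning, encrypting only ~10% of the update shrinks both ciphertext size and server-side aggregation work, which is the efficiency argument the paper makes.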
🔑 Research Question:
⚙️ Key Mechanism: Selective Homomorphic Encryption
📊 Main Results:
⚠️ Limitations:
Slides:
PDF (Korean) Download
Published:
Summary:
In preparation for a lab meeting, I studied the concept of Federated Unlearning, which extends the idea of “machine unlearning” to federated learning environments. While federated learning protects raw data by keeping it on clients, requests such as the “Right to be Forgotten” raise a crucial question: How can we safely remove the influence of specific data or clients from a trained federated model? This summary is based on the PEPR ’24 talk Learning and Unlearning Your Data in Federated Settings (USENIX).
🔑 Research Question:
⚙️ Conceptual Approaches:
📊 Key Insights:
⚠️ Limitations & Open Challenges:
🎥 Reference:
Slides:
PDF (Korean) Download
Published:
Summary:
For the lab meeting, I prepared a review of FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks. Machine unlearning aims to make models “forget” specific client data upon deletion requests. Unlike retraining-based solutions, which are often infeasible or risky in federated learning, FedRecovery introduces an efficient method to erase a client’s influence from the global model using a weighted sum of gradient residuals and differential privacy noise, without assuming convexity.
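The recovery step described above can be sketched as follows. This is a simplified reading of the method: the gradient residual is taken as the target client's gradient minus the mean of the others, and the round weights and noise scale are placeholder values, not the paper's derived quantities.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_clients, d = 5, 4, 6               # rounds, clients, model dim (toy)
# Per-round, per-client gradients the server has recorded.
grads = rng.normal(size=(T, n_clients, d))
global_model = rng.normal(size=d)

target = 0                              # client requesting deletion

# Gradient residual of the target client in each round: its gradient
# minus the mean gradient of the remaining clients (simplified).
others = np.delete(grads, target, axis=1).mean(axis=1)   # shape (T, d)
residuals = grads[:, target, :] - others                 # shape (T, d)

# Subtract a weighted sum of residuals from the final global model;
# uniform weights here are a placeholder for the paper's p_t.
weights = np.ones(T) / T
unlearned = global_model - (weights[:, None] * residuals).sum(axis=0)

# Gaussian noise is added so the recovered model is statistically
# indistinguishable from a retrained one; sigma is a made-up value.
sigma = 0.1
unlearned_dp = unlearned + rng.normal(scale=sigma, size=d)
```

The appeal of this construction is that it needs only stored gradients, not retraining, and the differential-privacy noise covers the gap between the recovered model and a genuinely retrained one without any convexity assumption.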
🔑 Research Question:
⚙️ Key Mechanism:
📊 Main Results:
⚠️ Limitations / Open Questions:
❓ Data Privacy Problem
Slides:
PDF (Korean) Download