Posts by Collection

publications

A standardized glucose-insulin-potassium infusion protocol in surgical patients: Use of real clinical data from a clinical data warehouse

Published in Diabetes Research and Clinical Practice, 2021

We proposed and validated a standardized glucose-insulin-potassium (GIK) infusion protocol for surgical patients, using real-world clinical data drawn from a hospital clinical data warehouse. This work highlights how large-scale clinical datasets can support practical decision-making in surgical care.

Recommended citation: Tae-jung Oh, Ji-hyung Kook, Se Young Jung, Duck-Woo Kim, Sung Hee Choi, Hong Bin Kim, Hak Chul Jang (2021). "A standardized glucose-insulin-potassium infusion protocol in surgical patients: Use of real clinical data from a clinical data warehouse." Diabetes Research and Clinical Practice, 174:108756.

talks

Paper Deep Dive – HETAL

Summary:
In this lab meeting, I reviewed HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption. Transfer learning tackles data-scarce problems by fine-tuning models that were pre-trained on large datasets. While previous studies focused mainly on encrypted inference, HETAL is the first practical scheme to enable encrypted training, fine-tuning a classifier directly on homomorphically encrypted data.
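
To make the workflow concrete, here is a minimal plaintext sketch of what HETAL-style fine-tuning computes: only the classification head is trained, on features from a frozen pre-trained backbone. In the real system the same loop runs over CKKS ciphertexts with optimized encrypted matrix multiplication and an approximated softmax; the toy data and hyperparameters below are my own illustration, not the paper's setup.

```python
import numpy as np

# Plaintext stand-in for HETAL-style transfer learning. In the real scheme
# the client uploads CKKS-encrypted features and the server runs this
# training loop homomorphically; here everything is in the clear.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # HETAL approximates softmax under HE
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def finetune_head(feats, labels, num_classes, lr=0.1, epochs=50):
    """Train only the classification head -- the part HETAL encrypts.
    `feats` come from a frozen pre-trained backbone (assumed precomputed)."""
    n, d = feats.shape
    W = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(epochs):
        probs = softmax(feats @ W)          # encrypted matmul in HETAL
        W -= lr * (feats.T @ (probs - onehot)) / n
    return W

feats = rng.normal(size=(128, 32))          # toy features
labels = rng.integers(0, 4, size=128)
W = finetune_head(feats, labels, num_classes=4)
print("trained head:", W.shape)
```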

Paper Deep Dive – MaskCRYPT

Summary:
In this lab meeting, I reviewed MASKCRYPT: Federated Learning With Selective Homomorphic Encryption. While federated learning protects raw data from direct leakage, exposing model weights can still enable serious privacy attacks such as membership inference. MASKCRYPT addresses this by encrypting only a small, carefully chosen fraction of each model update, striking a balance between security and efficiency under homomorphic encryption.
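
To make the selective-encryption idea concrete, here is a minimal NumPy sketch. MASKCRYPT chooses its mask using gradient-guided sensitivity; the plain magnitude heuristic and the `encrypt` stub below are simplifications of mine, not the paper's implementation.

```python
import numpy as np

def encrypt(values):
    """Stub: a real deployment would use an additively homomorphic scheme
    so the server can aggregate these ciphertexts without decrypting."""
    return values.copy()

def select_mask(update, frac=0.05):
    """Pick the fraction of update entries treated as most privacy-sensitive.
    MASKCRYPT derives its mask from gradients; magnitude is a stand-in here."""
    k = max(1, int(frac * update.size))
    thresh = np.partition(np.abs(update), -k)[-k]
    return np.abs(update) >= thresh

def split_update(update, mask):
    """Client sends HE ciphertexts for masked entries, plaintext for the rest."""
    return encrypt(update[mask]), np.where(mask, 0.0, update)

rng = np.random.default_rng(1)
update = rng.normal(size=1000)
mask = select_mask(update)
enc_part, plain_part = split_update(update, mask)
print(f"encrypted {int(mask.sum())} of {update.size} entries")
```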

Federated Unlearning: Concept & Challenges

Summary:
In preparation for a lab meeting, I studied the concept of Federated Unlearning, which extends the idea of “machine unlearning” to federated learning environments. While federated learning protects raw data by keeping it on clients, requests such as the “Right to be Forgotten” raise a crucial question: How can we safely remove the influence of specific data or clients from a trained federated model? This summary is based on the PEPR ’24 talk Learning and Unlearning Your Data in Federated Settings (USENIX).
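
For reference, the conceptually simplest answer is exact unlearning: retrain the federation from scratch with the departing client excluded. The FedAvg toy below (my own illustration, not from the talk) shows that baseline, which approximate methods such as the one reviewed next try to match at far lower cost.

```python
import numpy as np

def local_step(model, data, lr=0.1):
    """One round of local training: a single gradient step on a
    least-squares objective, standing in for real client training."""
    X, y = data
    return model - lr * X.T @ (X @ model - y) / len(y)

def fedavg(clients, rounds=50, dim=8):
    """Plain FedAvg: average the clients' locally updated models each round."""
    model = np.zeros(dim)
    for _ in range(rounds):
        model = np.mean([local_step(model, c) for c in clients], axis=0)
    return model

rng = np.random.default_rng(2)
clients = [(rng.normal(size=(20, 8)), rng.normal(size=20)) for _ in range(5)]

model_full = fedavg(clients)
model_forgotten = fedavg(clients[1:])   # exact unlearning: drop client 0, retrain
print("model shift from forgetting client 0:",
      np.linalg.norm(model_full - model_forgotten))
```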

Paper Deep Dive – FedRecovery

Summary:
For the lab meeting, I prepared a review of FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks. Machine unlearning aims to make models “forget” specific clients’ data upon deletion requests. Retraining-based solutions are often infeasible or risky in federated learning; FedRecovery instead erases a client’s influence from the global model by subtracting a weighted sum of gradient residuals and adding differential-privacy noise, all without assuming convexity of the loss.
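
Here is a minimal sketch of the recovery step, under simplifying assumptions of mine: the server has cached every round’s per-client updates, the residual weights are uniform rather than the paper’s weighting, and the noise scale `sigma` is a free parameter instead of being calibrated by the paper’s sensitivity analysis.

```python
import numpy as np

def fedrecovery_unlearn(final_model, round_updates, target, sigma, rng):
    """Approximately erase `target`'s influence from the final global model.

    round_updates: list over rounds; each entry maps client id -> the update
    that client contributed to the round's average.
    """
    residual = np.zeros_like(final_model)
    for updates in round_updates:
        avg_with = np.mean(list(updates.values()), axis=0)
        avg_without = np.mean(
            [u for cid, u in updates.items() if cid != target], axis=0)
        # Gradient residual: how much the target client shifted this round.
        residual += avg_with - avg_without
    # Subtract the accumulated residual, then add Gaussian noise so the
    # unlearned model is statistically close to a retrained one (the DP step).
    return final_model - residual + rng.normal(scale=sigma, size=final_model.shape)

rng = np.random.default_rng(3)
dim, n_clients, n_rounds = 8, 4, 10
round_updates = [{c: rng.normal(size=dim) for c in range(n_clients)}
                 for _ in range(n_rounds)]
final_model = sum(np.mean(list(u.values()), axis=0) for u in round_updates)
unlearned = fedrecovery_unlearn(final_model, round_updates, target=0,
                                sigma=0.01, rng=rng)
print("model after unlearning client 0:", unlearned[:3])
```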