HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption
This post is based on the paper HETAL: Efficient Privacy-preserving Transfer Learning with Homomorphic Encryption.
Overview
Summary:
A review of HETAL, which proposes an efficient framework for privacy-preserving transfer learning under homomorphic encryption.
🔑 Research Question:
- How can transfer learning be made both privacy-preserving and efficient when client data must remain encrypted?
⚙️ Key Mechanism:
- Encrypted Softmax Approximation: designs a precise softmax approximation compatible with HE constraints
- Efficient Matrix Multiplication: introduces an encrypted matrix multiplication method significantly faster than prior approaches
- End-to-end Encrypted Training: enables encrypted fine-tuning with validation-based early stopping
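The first mechanism is worth unpacking: CKKS-style homomorphic encryption supports only additions and multiplications on encrypted slots, so non-polynomial functions like exp and division must be replaced by polynomial approximations. Below is a minimal plaintext sketch of this idea, not HETAL's exact approximation: exp is replaced by a truncated Taylor polynomial and division by Newton's iteration, both of which use only add/multiply.

```python
import numpy as np

def poly_exp(x, terms=12):
    """Truncated Taylor polynomial for exp(x); accurate when x lies in a
    bounded range such as [-3, 0]. Only uses additions and multiplications."""
    result = np.zeros_like(x, dtype=float)
    term = np.ones_like(x, dtype=float)
    for k in range(1, terms + 1):
        result = result + term
        term = term * x / k
    return result

def poly_inverse(y, iters=8):
    """Approximate 1/y via Newton's iteration x <- x * (2 - y * x),
    which converges when y * x0 lies in (0, 2). The starting guess 0.5
    is an assumption that works for y in (0, 4)."""
    x = np.full_like(y, 0.5, dtype=float)
    for _ in range(iters):
        x = x * (2.0 - y * x)
    return x

def approx_softmax(logits):
    """Softmax built from polynomial primitives only (illustrative sketch).
    Assumes few enough classes that the exp-sum stays below 4, so the
    inverse iteration converges; real implementations rescale as needed."""
    # In real HE the max itself must be approximated polynomially;
    # np.max is used here only to keep the sketch short.
    z = logits - np.max(logits)      # shift logits into a bounded range
    e = poly_exp(z)                  # polynomial stand-in for exp
    s = np.sum(e)                    # in HE: rotations plus additions
    inv_s = poly_inverse(np.array([s]))[0]  # polynomial stand-in for 1/s
    return e * inv_s
```

The interesting constraint this illustrates is that accuracy depends on keeping inputs inside the approximation's valid range, which is exactly why approximation quality shows up as a limitation below.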
📊 Main Results:
- Demonstrates practical fine-tuning on encrypted data, going beyond prior work focused mainly on encrypted inference
- Reports training times of 567–3442 seconds across five benchmark datasets
- Achieves accuracy comparable to plaintext training in several settings
⚠️ Limitations / Open Questions:
- Accuracy can degrade depending on approximation quality
- Evaluation is limited to moderate-scale models and datasets
- Scalability to larger modern architectures remains unclear
💡 Insight:
HETAL shows that homomorphic encryption is not limited to private inference. With careful approximation and systems optimization, even parts of encrypted training can become practical, suggesting a path toward more realistic privacy-preserving ML pipelines.
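On the systems-optimization side, the building block behind encrypted matrix arithmetic is instructive. Ciphertexts expose only slot-wise addition, slot-wise multiplication, and cyclic rotation, so even a matrix-vector product must be decomposed into those operations. The sketch below emulates the classic diagonal method in plaintext; HETAL's DiagABT/DiagATB matrix-matrix algorithms are substantially more optimized, and this only illustrates the rotate-multiply-accumulate style they work in.

```python
import numpy as np

def diag_matvec(A, v):
    """Compute A @ v using only slot-wise multiplications, additions, and
    cyclic rotations -- the operations available on a CKKS-style ciphertext.
    This is the classic diagonal method for HE matrix-vector products."""
    n = A.shape[0]
    result = np.zeros(n)
    for i in range(n):
        # i-th generalized diagonal of A (kept as a plaintext vector in HE)
        diag_i = np.array([A[j, (j + i) % n] for j in range(n)])
        # np.roll stands in for a ciphertext rotation of the encrypted v
        result = result + diag_i * np.roll(v, -i)
    return result
```

Each of the n iterations costs one rotation and one multiplication, which is why reducing rotation counts, as HETAL's matrix multiplication does, translates directly into wall-clock speedups.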
Slides:
PDF (Korean) Download