DP-NMT: Scalable Differentially-Private Machine Translation
Published at EACL 2024 (System Demonstrations)
DP-NMT investigates scalable differentially-private training for neural machine translation, built on DP-SGD (per-example gradient clipping plus calibrated Gaussian noise). This work adapts and evaluates DP training across multiple pretrained NLP model architectures and translation benchmarks (WMT16 and the Business Scene Dialogue corpus, BSD).
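The core mechanism in this line of work is DP-SGD: compute per-example gradients, clip each to a fixed L2 norm, and add Gaussian noise scaled to that norm before the parameter update. Below is a minimal JAX sketch of one such step on a toy linear model; the loss, model, and hyperparameters are illustrative stand-ins, not the DP-NMT implementation.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy per-example squared-error loss standing in for an NMT loss.
    return (x @ params["w"] + params["b"] - y) ** 2

def dp_sgd_step(params, xb, yb, key, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    # Per-example gradients: vmap the gradient over the batch dimension.
    per_ex = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))(params, xb, yb)

    # Clip each example's gradient to global L2 norm <= clip_norm.
    sq = jax.tree_util.tree_map(
        lambda g: jnp.sum(g.reshape(g.shape[0], -1) ** 2, axis=1), per_ex)
    norms = jnp.sqrt(sum(jax.tree_util.tree_leaves(sq)))
    scale = jnp.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped_sum = jax.tree_util.tree_map(
        lambda g: jnp.tensordot(scale, g, axes=1), per_ex)

    # Add Gaussian noise calibrated to the clipping norm, then average.
    leaves, treedef = jax.tree_util.tree_flatten(clipped_sum)
    keys = jax.random.split(key, len(leaves))
    noisy = [g + noise_mult * clip_norm * jax.random.normal(k, g.shape)
             for g, k in zip(leaves, keys)]
    grads = jax.tree_util.tree_map(
        lambda g: g / xb.shape[0], jax.tree_util.tree_unflatten(treedef, noisy))
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Usage on random toy data:
key = jax.random.PRNGKey(0)
params = {"w": jnp.zeros(4), "b": jnp.zeros(())}
xb, yb = jax.random.normal(key, (8, 4)), jnp.ones(8)
params = dp_sgd_step(params, xb, yb, key)
```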
Contributions:
- Adapted differentially-private training to multiple NLP models and translation datasets
- Resolved out-of-memory (OOM) issues in the JAX training framework (see the microbatching sketch after this list)
- Extended the training pipeline to support additional datasets beyond the original scope (dataset-registry sketch below)
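Materializing per-example gradients for a full batch multiplies gradient memory by the batch size, a frequent cause of OOM in JAX DP training. A common remedy, sketched below under that assumption (the actual fix in the DP-NMT code may differ), is to accumulate clipped gradient sums over small microbatches so that only a slice of per-example gradients is live at any time; `loss_fn` is the toy loss from the sketch above.

```python
import jax
import jax.numpy as jnp

def clipped_grad_sum(params, xs, ys, clip_norm):
    # Clipped per-example gradient sum for one microbatch (as above).
    per_ex = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))(params, xs, ys)
    sq = jax.tree_util.tree_map(
        lambda g: jnp.sum(g.reshape(g.shape[0], -1) ** 2, axis=1), per_ex)
    norms = jnp.sqrt(sum(jax.tree_util.tree_leaves(sq)))
    scale = jnp.minimum(1.0, clip_norm / (norms + 1e-12))
    return jax.tree_util.tree_map(
        lambda g: jnp.tensordot(scale, g, axes=1), per_ex)

def dp_grads_microbatched(params, xb, yb, clip_norm=1.0, microbatch=16):
    # Only `microbatch` per-example gradients are materialized at once,
    # bounding peak memory independently of the logical batch size.
    total = jax.tree_util.tree_map(jnp.zeros_like, params)
    for i in range(0, xb.shape[0], microbatch):
        g = clipped_grad_sum(params, xb[i:i + microbatch],
                             yb[i:i + microbatch], clip_norm)
        total = jax.tree_util.tree_map(jnp.add, total, g)
    # Noise and averaging are applied once to the accumulated sum.
    return jax.tree_util.tree_map(lambda g: g / xb.shape[0], total)
```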
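Extending the pipeline to new corpora typically reduces to registering one loader per dataset. A hypothetical sketch using Hugging Face `datasets`; the registry name and structure are assumptions, not the actual pipeline code.

```python
from datasets import load_dataset

# Hypothetical registry mapping corpus names to loader functions.
DATASET_LOADERS = {
    "wmt16_de_en": lambda: load_dataset("wmt16", "de-en"),
    "bsd_ja_en": lambda: load_dataset("bsd_ja_en"),  # Business Scene Dialogue
}

def get_dataset(name):
    # Supporting an additional dataset means adding one registry entry.
    return DATASET_LOADERS[name]()
```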
Role: Co-author (Student Research Assistant, TrustHLT Lab, TU Darmstadt, May 2023 – November 2024)
