I’m a Ph.D. candidate in Computer Science at the University of Massachusetts Lowell, advised by Professor Hadi Amiri. I study data-efficient LLM training and learning dynamics/interpretability through the lens of linguistic complexity signals. In 2025, I was a Research Intern at Google DeepMind, working on multilingual factuality evaluation for Gemini. Previously, I received my BS in Computer Science from KAIST.
My work connects two themes: using linguistic complexity to make training more efficient and interpretable, and enabling fine-grained linguistic control over model outputs.
- Data-efficient LLM training: curriculum learning, data selection/ordering.
- Interpretability & learning dynamics: difficulty signals, model blind spots.
Current Directions
- Curriculum learning for large-scale pretraining: scaling curricula and data schedules to pretraining-scale corpora; analyzing the resulting training dynamics.
- RL for linguistically-controlled generation: reward shaping with partial credit for constraint satisfaction (a toy sketch follows this list), and studying how different data-ordering strategies improve RL stability.
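
To make "partial credit" concrete, here is a minimal sketch of the idea (my illustration, not the project's actual code): instead of an all-or-nothing reward, the policy receives the fraction of target linguistic constraints its generation satisfies, yielding a denser training signal. The `constraints` predicates and `partial_credit_reward` name are hypothetical.

```python
from typing import Callable, Sequence


def partial_credit_reward(
    text: str,
    constraints: Sequence[Callable[[str], bool]],
) -> float:
    """Return the fraction of linguistic constraints satisfied by `text`.

    Hypothetical sketch: each constraint is a predicate over the output
    (e.g., a target sentence length or word-frequency band). Returning
    the satisfied fraction gives partial credit rather than rewarding
    only fully-constrained generations.
    """
    if not constraints:
        return 0.0
    satisfied = sum(1 for check in constraints if check(text))
    return satisfied / len(constraints)


# Toy usage with two illustrative constraints:
reward = partial_credit_reward(
    "The cat sat on the mat.",
    [
        lambda t: len(t.split()) <= 10,  # length constraint
        lambda t: t.endswith("."),       # punctuation constraint
    ],
)
print(reward)  # 1.0: both constraints satisfied
```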
Selected Publications
Curriculum Learning for LLM Pretraining: An Analysis of Learning Dynamics
Mohamed Elgaar, Hadi Amiri
preprint [PDF]
LingGen: Linguistic Fine-grained Controlled Generation
Mohamed Elgaar, Hadi Amiri
arXiv:2410.24201, under review at EACL 2026 [PDF]
LingConv: An Interactive Toolkit for Controlled Paraphrase Generation with Linguistic Attribute Control
Mohamed Elgaar, Hadi Amiri
EMNLP 2025 Demo Track [PDF] [Demo]
Linguistically-Controlled Paraphrase Generation
Mohamed Elgaar, Hadi Amiri
Findings of EMNLP 2025 [PDF]
Ling-CL: Multiview Curriculum Learning using Linguistic Complexity
Mohamed Elgaar, Hadi Amiri
EMNLP 2023 [PDF] [Code]
News
- [Dec 2025] Released a preprint on curriculum learning for LLM pretraining (learning dynamics analysis).
- [Nov 2025] Linguistically-Controlled Paraphrase Generation presented at EMNLP 2025.
- [Jul 2025] MedDecXtract presented at the ACL 2025 Demo Track.
- [May 2025] Joined Google DeepMind as a Research Intern (Gemini multilingual factuality).
- [Oct 2024] Released the P-Masking / LingGen preprint on multi-attribute controlled generation.
- [Dec 2023] Presented Ling-CL at EMNLP 2023.
