
Hi there!
I’m a final-year PhD candidate at the Department of CSAI at Tilburg University, the Netherlands. I’m part of the consortium project InDeep: Interpreting Deep Learning Models for Text and Sound, and I’m happy to be supervised by Afra Alishahi, Willem Zuidema, and Grzegorz Chrupała.
My research focuses on interpreting deep neural language models (both written and spoken). I develop analysis methods to trace information flow and contextual interactions within these models, aiming to understand their inner workings and to make that understanding useful for improving model efficiency, controllability, and safety.
Background
I was a visiting researcher (Jan–Mar 2024) at ILCC, School of Informatics, University of Edinburgh, where I worked with Ivan Titov.
I completed my Master’s (2019–2021) in Artificial Intelligence at Iran University of Science and Technology, under the supervision of Mohammad Taher Pilehvar. My research there revolved around the interpretation of pre-trained language models and the use of interpretability techniques to accelerate their inference.
Before that, I received my Bachelor’s (2014–2019) in Computer Engineering from Ferdowsi University of Mashhad. During my studies, I worked under the supervision of Ahad Harati as a member of the Nexus RoboCup Simulation Team.
Public Activities
- Workshop Organizer
- BlackboxNLP (co-located with EMNLP 2023, 2024, 2025)
- Tutorial Instructor
- 💥 Upcoming tutorial on “Interpretability Techniques for Speech Models” at Interspeech 2025
- Tutorial on “Transformer-specific Interpretability” at EACL 2024
- Area Chair / Meta-reviewer
- Reviewer
- Conferences: EMNLP (2022, 2023), ACL 2023, EACL 2023, ACL Rolling Review (2022, 2023)
- Workshops: Actionable Interpretability (ICML 2025)
- Others
- Organizer of the InDeep Journal Club (2022–2024)
Highlighted news
- May 2025: Two papers accepted at Interspeech 2025: one on language-specific pretraining (Wav2Vec2-NL) and one on the reliability of feature attribution for speech
- Dec 2024: 📺 A series of short videos on Transformer Interpretability available on YouTube!
- Oct 2024: Check out our new preprint: Disentangling Textual and Acoustic Features of Neural Speech Representations
- Mar 2024: Materials (slides, notebooks, etc.) for EACL 2024 tutorial on “Transformer-specific Interpretability” are available here.
- Dec 2023: 🏅 Received an Outstanding Paper Award for “Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers” at EMNLP 2023!
- Mar 2023: Blog Post: A few thoughts on why Value Zeroing.