I’m a Ph.D. candidate in the Department of Cognitive Science and Artificial Intelligence at Tilburg University. I’m part of the NWO-funded project InDeep: Interpreting Deep Learning Models for Text and Sound, supervised by Afra Alishahi, Grzegorz Chrupała, and Willem Zuidema.
My research focuses on analyzing and interpreting neural language and speech models.
Previously, I completed my Master’s degree in Artificial Intelligence at Iran University of Science and Technology, where my research centered on the interpretability of pre-trained language models and on accelerating their inference, under the supervision of Mohammad Taher Pilehvar.
Before that, I received my Bachelor’s degree in Computer Engineering from Ferdowsi University of Mashhad, where I worked under the supervision of Ahad Harati as a member of the Nexus RoboCup Simulation Team.
- Reviewed for: ACL 2023, EACL 2023, EMNLP 2022, ACL Rolling Review
- Co-organizer of: InDeep Journal Club, BlackboxNLP 2023 Workshop
- Jan 2023: 🥳 Value Zeroing is out: a new interpretability method tailored to Transformers (accepted to the EACL 2023 main conference).
- Jan 2023: Presented a poster at ALiAS 2023.
- Dec 2022: BlackboxNLP will be back, co-located with EMNLP 2023! Happy to be serving as a co-organizer.
- Sep 2022: Gave a guest lecture on the Interpretability of Transformers in the graduate Interpretability course at Tilburg University. [slides]
- May 2022: Gave a short talk at the InDeep workshop at the University of Amsterdam.
- Feb 2022: 🥳 AdapLeR is out, achieving up to 22x inference speedup while retaining performance (accepted to the ACL 2022 main conference).
- Nov 2021: Moved to the Netherlands to join the consortium project InDeep: Interpreting Deep Learning Models for Text and Sound.
- Sep 2021: 🎓 Successfully defended my Master’s thesis titled “Interpretability and Transferability of Linguistic Knowledge in Pre-trained Language Models”.
- Sep 2021: 🥳 Two papers accepted to EMNLP 2021 (main conference and BlackboxNLP).
- Jun 2021: Invited talk at Cambridge/Cardiff Workshop in Natural Language Processing.
- May 2021: Gave a joint guest lecture, with Ali, on Interpretability as part of the graduate NLP course at Khatam University. [slides]
- Apr 2021: Our interpretability pre-print is ready! Exploring the Role of BERT Token Representations to Explain Sentence Probing Results.