
Hi there!
I’m a second-year Ph.D. candidate in the Department of Cognitive Science and Artificial Intelligence (CSAI) at Tilburg University. I’m part of the NWO-funded project InDeep: Interpreting Deep Learning Models for Text and Sound, and happy to be supervised by Afra Alishahi, Willem Zuidema, and Grzegorz Chrupała.
My research focuses on analyzing and interpreting deep neural models of language (both written and spoken) by treating them as mathematical functions rather than black boxes that merely map inputs to outputs. I aim to develop analysis methods that faithfully elucidate the flow and interplay of information within neural networks.
Background
I completed my Master’s degree in Artificial Intelligence at Iran University of Science and Technology under the supervision of Mohammad Taher Pilehvar. My research there focused on interpreting pre-trained language models and on using interpretability techniques to accelerate their inference.
Before that, I received my Bachelor’s degree in Computer Engineering from Ferdowsi University of Mashhad, where I worked under the supervision of Ahad Harati as a member of the Nexus RoboCup Simulation Team.
Services
- I’m a co-organizer of the InDeep Journal Club and the BlackboxNLP 2023 Workshop.
- I have reviewed for EMNLP’23, ACL’23, EACL’23, and ACL Rolling Review 2022.
News
- Jun 2023: Invited talk on “context mixing in Transformers” at GroNLP, University of Groningen.
- May 2023: Gave a guest lecture on Transformers to an undergraduate CL course at Tilburg University.
- Mar 2023: New blog post: A few thoughts on why Value Zeroing.
- Jan 2023: 🥳 Value Zeroing is out, a new interpretability method customized for Transformers (accepted to the EACL’23 main conference).
- Jan 2023: Presented a poster at ALiAS’23.
- Dec 2022: BlackboxNLP will be back in 2023 at EMNLP! Happy to be serving as a co-organizer.
- Sep 2022: Gave a guest lecture on “Interpretability of Transformers” to a graduate Advanced Deep Learning course at Tilburg University. [slides]
- May 2022: Gave a short talk at InDeep workshop at the University of Amsterdam.
- Feb 2022: 🥳 AdapLeR is out, offering up to 22x inference speedup while retaining performance (ACL’22 main).
- Nov 2021: Moved to the Netherlands to join the InDeep consortium project.
- Sep 2021: 🎓 Successfully defended my Master’s thesis titled “Interpretability and Transferability of Linguistic Knowledge in Pre-trained Language Models”.
- Sep 2021: 🥳 Two papers accepted to EMNLP’21 (main conference and BlackboxNLP).
- Jun 2021: Invited talk at Cambridge/Cardiff Workshop in Natural Language Processing.
- May 2021: Gave a joint guest lecture, with Ali, on Interpretability to a graduate NLP course at Khatam University. [slides]
- Apr 2021: Our interpretability pre-print is ready! Exploring the Role of BERT Token Representations to Explain Sentence Probing Results.