Hi there!
I’m a Ph.D. candidate at the Department of Cognitive Science and Artificial Intelligence at Tilburg University. I’m part of the NWO-funded project InDeep: Interpreting Deep Learning Models for Text and Sound, where I’m happy to be supervised by Afra Alishahi, Grzegorz Chrupała, and Jelle Zuidema.
My main research interest lies in deep learning for natural language processing, particularly in analyzing and interpreting neural language models.
Previously, I completed my Master’s degree in Artificial Intelligence at Iran University of Science and Technology, where I focused mostly on the interpretability of pre-trained language models and on accelerating their inference, under the supervision of Mohammad Taher Pilehvar.
Before that, I received my Bachelor’s in Computer Engineering from Ferdowsi University of Mashhad. During my studies there, I worked under the supervision of Ahad Harati as a member of the Nexus RoboCup Simulation Team.
News
- May 2022: Presented a poster at ACL 2022 in Dublin, Ireland.
- May 2022: Gave a short talk at the InDeep workshop at the University of Amsterdam.
- Feb 2022: 🥳 Our paper on improving model efficiency was accepted to the ACL 2022 main conference.
- Jan 2022: Started serving as a reviewer for ACL Rolling Review.
- Nov 2021: Moved to the Netherlands to join the consortium project InDeep: Interpreting Deep Learning Models for Text and Sound.
- Nov 2021: Gave an oral presentation at EMNLP 2021.
- Sep 2021: 🎓 Successfully defended my Master’s thesis titled “Interpretability and Transferability of Linguistic Knowledge in Pre-trained Language Models”.
- Sep 2021: 🥳 Two papers accepted to EMNLP 2021 (main conference and BlackboxNLP).
- Jun 2021: Gave an invited talk at the Cambridge/Cardiff Workshop in Natural Language Processing.
- May 2021: Gave a joint guest lecture, with Ali, on interpretability as part of the graduate NLP course at Khatam University. [slides]
- Apr 2021: Our interpretability pre-print is ready! Exploring the Role of BERT Token Representations to Explain Sentence Probing Results.