Tutorials
- A series of short educational videos on how to open the Black Box of Large Language Models:
  - Why it is crucial to track how Transformers mix information [YouTube]
  - How to best measure context-mixing in Transformers [YouTube]
- “Transformer-specific Interpretability”, tutorial at the EACL 2024 conference, Malta [materials]
Invited Talks
- Apr 2024: CLILLAC-ARP, Université Paris Cité, Online
- Mar 2024: TeIAS, Online
- Feb 2024: CardiffNLP, Cardiff University
- Feb 2024: CSTR, University of Edinburgh
- Jun 2023: GroNLP, University of Groningen
- Jun 2021: Cambridge/Cardiff Workshop in Natural Language Processing, Online
Co-lectures
- Winter 2023, 2024: Machine Learning, Tilburg University
- Winter 2023, 2024: Methods for Responsible AI, Tilburg University
Guest Lectures
- May 2023: “Introduction to Transformers”, undergraduate Computational Linguistics course at Tilburg University
- Sep 2022: “Interpretability of Transformers”, graduate Advanced Deep Learning course at Tilburg University
- May 2021: “Analysis & Interpretability in NLP”, jointly presented with Ali Modarressi, graduate NLP course at Khatam University [slides]
Teaching Assistant
- Winter 2021: Natural Language Processing, Khatam University [webpage]
- 2017-2019: Artificial Intelligence and Expert Systems, Ferdowsi University
- 2016-2019: Computer Architecture, Ferdowsi University
- 2016-2017: Data Structures and Algorithms, Ferdowsi University
- 2016-2019: Basics of Computer Programming and Algorithms, Ferdowsi University
Students Co-supervised
- 2024: Hamidreza Amirzadeh. MS. Topic: Interpretability (published at BlackboxNLP 2024)
- 2021: Mohsen Fayyaz, Ehsan Aghazadeh. BS. Topic: Interpretability (published at BlackboxNLP 2021)