Publications
Ehsan Doostmohammadi, Oskar Holmström, and Marco Kuhlmann. 2024. How Reliable Are Automatic Evaluation Methods for Instruction-Tuned LLMs? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6321–6336, Miami, Florida, USA. Association for Computational Linguistics.
Alexander Arno Weber, Klaudia Thellmann, Jan Ebert, Nicolas Flores-Herr, Jens Lehmann, Michael Fromm, and Mehdi Ali. 2024. Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20829–20855, Miami, Florida, USA. Association for Computational Linguistics.
Oskar Holmström and Jenny Kunz. 2024. The Impact of Language Adapters in Cross-Lingual Transfer for NLU. In Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024), pages 24–43, St. Julian's, Malta. Association for Computational Linguistics.
Annika Simonsen, Hafsteinn Einarsson, and Iben Nyholm Debess. 2024. Good or Bad News? Exploring GPT-4 for Sentiment Analysis for Faroese on a Public News Corpora. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7814–7824, Torino, Italia. ELRA and ICCL.
Mehdi Ali, Michael Fromm, Klaudia Thellmann, Richard Rutmann, Max Lübbering, Johannes Leveling, Katrin Klug, Jan Ebert, Niclas Doll, Jasper Schulze Buschhoff, Charvi Jain, Alexander Arno Weber, Lena Jurkschat, Hammam Abdelwahab, Chelsea John, Pedro Ortiz Suarez, Malte Ostendorff, Samuel Weinbach, Rafet Sifa, Stefan Kesselheim, and Nicolas Flores-Herr. 2024. Tokenizer Choice for LLM Training: Negligible or Crucial? In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3907–3924, Mexico City, Mexico. Association for Computational Linguistics.
Shangrui Nie, Michael Fromm, Charles Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Afsan Mowmita, Nicolas Flores-Herr, Mehdi Ali, and Lucie Flek. 2024. Do Multilingual Large Language Models Mitigate Stereotype Bias? In Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP, pages 65–83, Bangkok, Thailand. Association for Computational Linguistics.