Publications

2023

Alexander Bondarenko, Maik Fröbe, Johannes Kiesel, Ferdinand Schlatt, Valentin Barriere, Brian Ravenet, Léo Hemamou, Simon Luck, Jan Heinrich Reimer, Benno Stein, Martin Potthast, and Matthias Hagen. Overview of Touché 2023: Argument and Causal Retrieval. In Avi Arampatzis et al., editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction. 14th International Conference of the CLEF Association (CLEF 2023), Lecture Notes in Computer Science, September 2023. Springer. [bib] [copylink] [ecir invited paper] [event]
Tobias Schreieder and Jan Braker. Touché 2022 Best of Labs: Neural Image Retrieval for Argumentation. In Mohammad Aliannejadi, Guglielmo Faggioli, Nicola Ferro, and Michalis Vlachos, editors, Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum, September 2023. CEUR-WS.org. [bib] [copylink] [slides]
Abdul Aziz, MD. Akram Hossain, and Abu Nowshed Chy. CSECU-DSG at SemEval-2023 Task 4: Fine-tuning DeBERTa Transformer Model with Cross-fold Training and Multi-sample Dropout for Human Values Identification. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1988–1994, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Georgios Balikas. John-Arthur at SemEval-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1428–1432, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Fidan Can. Tübingen at SemEval-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1763–1768, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Nordin El Balima Cordero, Jacinto Mata, Victoria Pachón, and Abel Pichardo Estevez. I2C Huelva at SemEval-2023 Task 4: A Resampling and Transformers Approach to Identify Human Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1382–1387, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Christian Fang, Qixiang Fang, and Dong Nguyen. Epicurus at SemEval-2023 Task 4: Improving Prediction of Human Values behind Arguments by Leveraging Their Definitions. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 221–229, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Alfio Ferrara, Sergio Picascia, and Elisabetta Rocchetti. Augustine of Hippo at SemEval-2023 Task 4: An Explainable Knowledge Extraction Method to Identify Human Values in Arguments with SuperASKE. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1044–1053, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Omid Ghahroodi, Mohammad Ali Sadraei Javaheri, Doratossadat Dastgheib, Mahdieh Soleymani Baghshah, Mohammad Hossein Rohban, Hamid R. Rabiee, and Ehsaneddin Asgari. Sina at SemEval-2023 Task 4: A Class-Token Attention-based Model for Human Value Detection. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2164–2167, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Kenan Hasanaliyev, Kevin Li, Saanvi Chawla, Michael Nath, Rohan Sanda, Justin Wu, William Huang, Daniel Yang, Shane Mion, and Kiran Bhat. Francis Bacon at SemEval-2023 Task 4: Ensembling BERT and GloVe for Value Identification in Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2039–2042, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Ethan Heavey, Milton King, and James Hughes. StFX-NLP at SemEval-2023 Task 4: Unsupervised and Supervised Approaches to Detecting Human Values in Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 205–211, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Hamed Hematian Hemati, Sayed Hesam Alavian, Hossein Sameti, and Hamid Beigy. SUTNLP at SemEval-2023 Task 4: LG-Transformer for Human Value Detection. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 340–346, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Sumire Honda and Sebastian Wilharm. Noam Chomsky at SemEval-2023 Task 4: Hierarchical Similarity-aware Model for Human Value Detection. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1359–1364, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Siri Venkata Pavan Kumar Kandru, Bhavyajeet Singh, Ankita Maity, Kancharla Aditya Hari, and Vasudeva Varma. Tenzin-Gyatso at SemEval-2023 Task 4: Identifying Human Values behind Arguments Using DeBERTa. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2062–2066, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Johannes Kiesel, Milad Alshomary, Nailia Mirzakhmedova, Maximilian Heinrich, Nicolas Handke, Henning Wachsmuth, and Benno Stein. SemEval-2023 Task 4: ValueEval: Identification of Human Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2287–2303, July 2023. Association for Computational Linguistics. [bib] [copylink] [data] [demo] [doi] [event] [publisher] [slides]
Long Ma. PAI at SemEval-2023 Task 4: A General Multi-label Classification System with Class-balanced Loss Function and Ensemble Module. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 256–261, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Abdul Jawad Mohammed, Sruthi Sundharram, and Sanidhya Sharma. Friedrich Nietzsche at SemEval-2023 Task 4: Detection of Human Values from Text Using Machine Learning. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2179–2183, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Erfan Moosavi Monazzah and Sauleh Eetemadi. Prodicus at SemEval-2023 Task 4: Enhancing Human Value Detection with Data Augmentation and Fine-Tuned Language Models. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2033–2038, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Ajay Narasimha Mopidevi and Hemanth Chenna. Quintilian at SemEval-2023 Task 4: Grouped BERT for Multi-Label Classification. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1609–1612, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Milad Molazadeh Oskuee, Mostafa Rahgouy, Hamed Babaei Giglou, and Cheryl D. Seals. T.M. Scanlon at SemEval-2023 Task 4: Leveraging Pretrained Language Models for Human Value Argument Mining with Contrastive Learning. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 603–608, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Georgios Papadopoulos, Marko Kokol, Maria Dagioglou, and Georgios Petasis. Andronicus of Rhodes at SemEval-2023 Task 4: Transformer-Based Human Value Detection Using Four Different Neural Network Architectures. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 542–548, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Spencer Paulissen and Caroline Wendt. Lauri Ingman at SemEval-2023 Task 4: A Chain Classifier for Identifying Human Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 193–198, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Sougata Saha and Rohini Srihari. Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for Identifying Human Values from Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 660–663, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Daniel Schroter, Daryna Dementieva, and Georg Groh. Adam-Smith at SemEval-2023 Task 4: Discovering Human Values in Arguments with Ensembles of Transformer-based Models. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 532–541, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Xianxian Song, Jinhui Zhao, Ruiqi Cao, Linchi Sui, Binyang Li, and Tingyue Guan. Arthur Caplan at SemEval-2023 Task 4: Enhancing Human Value Detection through Fine-tuned Pre-trained Models. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1953–1959, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Nicolas Stefanovitch, Bertrand de Longueville, and Mario Scharfbillig. TeamEC at SemEval-2023 Task 4: Transformers vs. Low-Resource Dictionaries, Expert Dictionary vs. Learned Dictionary. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 2107–2111, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Ignacio Talavera Cepeda, Amalie Brogaard Pauli, and Ira Assent. Søren Kierkegaard at SemEval-2023 Task 4: Label-aware Text Classification Using Natural Language Inference. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1871–1877, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Kushagri Tandon and Niladri Chatterjee. LRL_NC at SemEval-2023 Task 4: The Touche23-George-boole Approach for Multi-Label Classification of Human-Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 136–142, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Masaya Tsunokake, Atsuki Yamaguchi, Yuta Koreeda, Hiroaki Ozaki, and Yasuhiro Sogawa. Hitachi at SemEval-2023 Task 4: Exploring Various Task Formulations Reveals the Importance of Description Texts on Human Values. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1723–1735, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Rob van der Goot. MaChAmp at SemEval-2023 Tasks 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12: On the Effectiveness of Intermediate Training on an Uncurated Collection of Datasets. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 230–245, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Dimitrios Zaikis, Stefanos D. Stefanidis, Konstantinos Anagnostopoulos, and Ioannis Vlahavas. Aristoxenus at SemEval-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 1037–1043, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]
Che Zhang, Ping'an Liu, Zhenyang Xiao, and Haojun Fei. Mao-Zedong at SemEval-2023 Task 4: Label Represention Multi-Head Attention Model with Contrastive Learning-Enhanced Nearest Neighbor Mechanism for Multi-Label Text Classification. In Ritesh Kumar et al., editors, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval'23), pages 426–432, July 2023. Association for Computational Linguistics. [bib] [copylink] [doi] [publisher]

2022

Maria Aba, Munzer Azra, Marco Gallo, Odai Mohammad, Ivan Piacere, Giacomo Virginio, and Nicola Ferro. Aldo Nadi at Touché 2022: Argument Retrieval for Comparative Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2904–2918, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Niclas Arnhold, Philipp Rösner, and Tobias Xylander. Quality-Aware Argument Re-Ranking for Comparative Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3105–3114, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Sepide Bahrami, Gnana Prakash Goli, Andrea Pasin, Neemol Rajkumari, Mohammad Muzammil Sohail, Paria Tahan, and Nicola Ferro. SEUPD@CLEF: Team INTSEG on Argument Retrieval for Controversial Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2919–2932, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Manuel Barusco, Gabriele Del Fiume, Riccardo Forzan, Mario Giovanni Peloso, Nicola Rizzetto, Elham Soleymani, and Nicola Ferro. SEUPD@CLEF: Team Lgtm on Argument Retrieval for Controversial Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2933–2955, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Alessandro Benetti, Michele De Togni, Giovanni Foti, Ralton Lacini, Andrea Matteazzi, Enrico Sgarbossa, and Nicola Ferro. SEUPD@CLEF: Team Gamora on Argument Retrieval for Controversial Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2956–2968, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Jan Braker, Lorenz Heinemann, and Tobias Schreieder. Aramis at Touché 2022: Argument Detection in Pictures using Machine Learning. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2969–2998, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Thilo Brummerloh, Miriam Louise Carnot, Shirin Lange, and Gregor Pfänder. Boromir at Touché 2022: Combining Natural Language Processing and Machine Learning Techniques for Image Retrieval for Arguments. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 2999–3017, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Lorenzo Cappellotto, Matteo Lando, Daniel Lupu, Marco Mariotto, Riccardo Rosalen, and Nicola Ferro. SEUPD@CLEF: Team 6musk on Argument Retrieval for Controversial Questions by Using Pairs Selection and Query Expansion. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3018–3031, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Viktoriia Chekalina and Alexander Panchenko. Retrieving Comparative Arguments using Deep Language Models. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3032–3040, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Alessandro Chimetto, Davide Peressoni, Enrico Sabbatini, Giovanni Tommasin, Marco Varotto, Alessio Zanardelli, and Nicola Ferro. SEUPD@CLEF: Team hextech on Argument Retrieval for Comparative Questions. The importance of adjectives in documents quality evaluation. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3041–3054, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Bernardo C. Moreira, Henrique Lopes Cardoso, Bruno Martins, and Fábio Goularte. Team Bruce Banner at Touché 2022: Argument Retrieval for Controversial Questions. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3055–3063, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Pavani Rajula, Chia-Chien Hung, and Simone Paolo Ponzetto. Stacked Model based Argument Extraction and Stance Detection using Embedded LSTM model. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3064–3073, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Ashish Rana, Pujit Golchha, Roni Juntunen, Andreea Coajă, Ahmed Elzamarany, Chia-Chien Hung, and Simone Paolo Ponzetto. LEVIRANK: Limited Query Expansion with Voting Integration for Document Retrieval and Ranking. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3074–3089, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Jan Heinrich Reimer, Johannes Huck, and Alexander Bondarenko. Grimjack at Touché 2022: Axiomatic Re-ranking and Query Reformulation. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3090–3104, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Sebastian Schmidt, Jonas Probst, Bianca Bartelt, and Alexander Hinz. The Pearl Retriever: Two-Stage Retrieval for Pairs of Argumentative Sentences. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3115–3130, 2022. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Cuong Vo Ta, Florian Reiner, Immanuel Detten, and Fabian Stöhr. Touché - Task 1 - Team Korg: Finding pairs of argumentative sentences using embeddings. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3131–3148, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Nils Wenzlitschke and Pia Sülzle. Using BERT to retrieve relevant and argumentative sentence pairs. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3149–3163, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Jerome Würf. Similar but Different: Simple Re-ranking Approaches for Argument Retrieval. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury, and Martin Potthast, editors, Working Notes Papers of the CLEF 2022 Evaluation Labs, volume 3180 of CEUR Workshop Proceedings, pages 3164–3177, 2022. CEUR-WS.org. [bib] [copylink] [publisher]
Alexander Bondarenko, Maik Fröbe, Johannes Kiesel, Shahbaz Syed, Timon Gurcke, Meriem Beloucif, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. Overview of Touché 2022: Argument Retrieval. In Alberto Barrón-Cedeño et al., editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022), Lecture Notes in Computer Science, September 2022. Springer. [bib] [copylink] [ecir invited paper] [event] [slides]

2021

Alexander Bondarenko, Lukas Gienapp, Maik Fröbe, Meriem Beloucif, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. Overview of Touché 2021: Argument Retrieval. In K. Selçuk Candan et al., editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction. 12th International Conference of the CLEF Association (CLEF 2021), volume 12880 of Lecture Notes in Computer Science, pages 450–467, September 2021. Springer. [bib] [clef working notes] [copylink] [doi] [ecir invited paper] [event] [publisher] [slides] [video]
Raunak Agarwal, Andrei Koniaev, and Robin Schaefer. Exploring Argument Retrieval for Controversial Questions Using Retrieve and Re-rank Pipelines. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2285–2291, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Christopher Akiki, Maik Fröbe, Matthias Hagen, and Martin Potthast. Learning to Rank Arguments with Feature Selection. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2292–2301, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Marco Alecci, Tommaso Baldo, Luca Martinelli, and Elia Ziroldo. Development of an IR System for Argument Search. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2302–2318, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Alaa Alhamzeh, Mohamed Bouhaouel, Elöd Egyed-Zsigmond, and Jelena Mitrović. DistilBERT-based Argumentation Retrieval for Answering Comparative Questions. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2319–2330, September 2021. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Andrea Cassetta, Alberto Piva, and Enrico Vicentini. Document retrieval task on controversial topic with Re-Ranking approach. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2331–2353, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Viktoriia Chekalina and Alexander Panchenko. Retrieving Comparative Arguments using Ensemble Methods and BERT. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2354–2365, September 2021. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Lukas Gienapp. Quality-aware Argument Retrieval with Topical Clustering. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2366–2373, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Tommaso Green, Luca Moroldo, and Alberto Valente. Exploring BERT Synonyms and Quality Prediction for Argument Retrieval. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2374–2388, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Daniel Helmrich, Denis Streitmatter, Fionn Fuchs, and Maximilian Heykeroth. Touché Task 2: Comparative Argument Retrieval – A document-based search engine for answering comparative questions. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2389–2402, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Thi Kim Hanh Luu and Jan-Niklas Weder. Argument Retrieval for Comparative Questions based on independent features. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2403–2416, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Alina Mailach, Denise Arnold, Stefan Eysoldt, and Simon Kleine. Exploring Document Expansion for Argument Retrieval. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2417–2422, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Edoardo Raimondi, Marco Alessio, and Nicola Levorato. A Search Engine System for Touché Argument Retrieval task to answer Controversial Questions. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2423–2440, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]
Kevin Ros, Carl Edwards, Heng Ji, and Chengxiang Zhai. Team Skeletor at Touché 2021: Argument Retrieval and Visualization for Controversial Questions. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2441–2454, September 2021. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Ekaterina Shirshakova and Ahmad Wattar. Thor at Touché 2021: Argument Retrieval for Comparative Questions. In Guglielmo Faggioli et al., editors, Working Notes Papers of the CLEF 2021 Evaluation Labs, volume 2936 of CEUR Workshop Proceedings, pages 2455–2462, September 2021. CEUR-WS.org. [bib] [copylink] [publisher]

2020

Alexander Bondarenko, Maik Fröbe, Meriem Beloucif, Lukas Gienapp, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. Overview of Touché 2020: Argument Retrieval. In Avi Arampatzis et al., editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction. 11th International Conference of the CLEF Association (CLEF 2020), volume 12260 of Lecture Notes in Computer Science, pages 384–395, September 2020. Springer. [bib] [clef working notes] [copylink] [doi] [event] [publisher] [slides] [video]
Tinsaye Abye, Tilmann Sager, and Anna Juliane Triebel. An Open-Domain Web Search Engine for Answering Comparative Questions. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Christopher Akiki and Martin Potthast. Exploring Argument Retrieval with Transformers. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Maximilian Bundesmann, Lukas Christ, and Matthias Richter. Creating an Argument Search Engine for Online Debates. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Viktoria Chekalina and Alexander Panchenko. Retrieving Comparative Arguments using Deep Pre-trained Language Models and NLU. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Lorik Dumani and Ralf Schenkel. Ranking Arguments by Combining Claim Similarity and Argument Quality Dimensions. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher] [slides]
Saeed Entezari and Michael Völske. Argument Retrieval Using Deep Neural Ranking Models. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Johannes Huck. Development of a Search Engine to Answer Comparative Queries. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Mahsa S. Shahshahani and Jaap Kamps. University of Amsterdam at CLEF 2020. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Bjarne Sievers. Question Answering for Comparative Questions with GPT-2. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Christian Staudte and Lucas Lange. SentArg: A Hybrid Doc2Vec/DPH Model with Sentiment Analysis Refinement. In Linda Cappellato, Carsten Eickhoff, Nicola Ferro, and Aurélie Névéol, editors, Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings, September 2020. CEUR-WS.org. [bib] [copylink] [publisher]
Alexander Bondarenko, Matthias Hagen, Martin Potthast, Henning Wachsmuth, Meriem Beloucif, Chris Biemann, Alexander Panchenko, and Benno Stein. Touché: First Shared Task on Argument Retrieval. In Pablo Castells et al., editors, Advances in Information Retrieval. 42nd European Conference on IR Research (ECIR 2020), volume 12036 of Lecture Notes in Computer Science, pages 517–523, April 2020. Springer. [bib] [copylink] [doi] [event] [publisher]