Call for Papers


Important Dates (Time zone: Anywhere on Earth)

  • Submission of Long, Short, Best of 2022 Labs Papers:
    21 May, 2023 (extended from 12 May)
  • Notification of Acceptance: 9 June, 2023
  • Camera Ready Copy due: 30 June, 2023
  • Conference: 18-21 September, 2023

Aim and Scope

The CLEF Conference addresses all aspects of Information Access in any modality and language. The CLEF conference includes presentation of research papers and a series of workshops presenting the results of lab-based comparative evaluation benchmarks.

CLEF 2023 is the 14th CLEF conference, continuing the popular CLEF campaigns which have run since 2000 and have contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. The CLEF conference has a clear focus on experimental IR as carried out within evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, ROMIP, SemEval, and TAC), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users (such as children, students, or impaired users) in different tasks (e.g., academic, professional, or everyday-life). We invite paper submissions on significant new insights demonstrated on IR test collections, on the analysis of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield-style evaluation paradigm.

All submissions to the CLEF main conference will be reviewed on the basis of relevance, originality, importance, and clarity. CLEF welcomes papers that describe rigorous hypothesis testing regardless of whether the results are positive or negative. CLEF also welcomes analyses of past runs/results/data as well as new data collections. Methods are expected to be described so that they are reproducible by others, and the logic of the research design should be clearly laid out in the paper. The conference proceedings will be published in the Springer Lecture Notes in Computer Science (LNCS) series.


Topics

Relevant topics for the CLEF 2023 Conference include but are not limited to:

  • Information Access in any language or modality: information retrieval, image retrieval, question answering, search interfaces and design, infrastructures, etc.
  • Analytics for Information Retrieval: theoretical and practical results in the analytics field that are specifically targeted for information access data analysis, data enrichment, etc.
  • User studies, based either on lab experiments or on crowdsourcing.
  • In-depth analysis of past results/runs, both statistical and fine-grained.
  • Evaluation initiatives: conclusions, lessons learned, impact, and projection of any evaluation initiative after completing its cycle.
  • Evaluation: methodologies, metrics, statistical and analytical tools, component based, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc.
  • Technology transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
  • Interactive Information Retrieval evaluation: the interactive evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive evaluation methods, simulation of interaction, etc.
  • Specific application domains: information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, health information, legal documents, patents, news, books, and in the form of text, audio and/or image data.
  • New data collections: presentation of new data collections with potentially high impact on future research, specific collections from companies or labs, multilingual collections.
  • Work on data from rare languages, as well as collaborative and social data.

Format

Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format:

http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines

Two types of papers are solicited:

  • Long papers: 12 pages max (including references). Intended to report complete research work.
  • Short papers: 6 pages max (including references). Position papers, new evaluation proposals, developments and applications, etc.

Review Process

Authors of long and short papers are asked to submit the following TWO versions of their manuscript:

Methodology version: This version does NOT report anything related to the results of the study. At this stage, manuscripts will be evaluated based on the importance of the problem addressed and the soundness of the methodology. Manuscripts can include an introduction, a description of the proposed methodology, and the datasets used. However, there should be no results or discussion sections. The authors should also remove mentions of results from the included sections (e.g., abstract, introduction).

Experimental version: This is the full version of the manuscript that contains all the sections of the paper including the experiments and results.

Papers will be peer-reviewed by 3 members of the program committee in two stages. In the first stage, the members will review the methodology version of the manuscripts based on originality and methodology. In the second stage, the full version of the manuscripts that passed the first stage will be reviewed. Selection will be based on originality, clarity, and technical quality.

The deadline for the submission of both versions is 21 May, 2023 (extended; see Important Dates above).


Paper Submission

Papers should be submitted in PDF format to the following address:

https://easychair.org/my/conference?conf=clef2023

  • Submit the methodology version at the Methodology Track
  • Submit the experimental version at the Experimental Track

Organization

General Chairs

Evangelos Kanoulas, Univ. of Amsterdam, the Netherlands
Theodora Tsikrika, I.T.I., CERTH, Greece
Stefanos Vrochidis, I.T.I., CERTH, Greece
Avi Arampatzis, Democritus University of Thrace, Greece

Program Chairs

Anastasia Giachanou, Utrecht University, the Netherlands
Dan Li, Elsevier

Evaluation Lab Chairs

Mohammad Aliannejadi, Univ. of Amsterdam, the Netherlands
Michalis Vlachos, University of Lausanne, Switzerland

Lab Mentorship Chair

Jian-Yun Nie, University of Montreal, Canada