CLEF promotes the systematic evaluation of information access systems, primarily through experimentation on shared tasks.

CLEF 2023 consists of 13 Labs designed to test different aspects of multilingual and multimedia IR systems:

  1. BioASQ: Large-scale Biomedical Semantic Indexing and Question Answering
  2. CheckThat!: Predicting the Subjectivity, the Political Leaning, and the Factuality of Reporting of News Articles and Media Outlets
  3. DocILE: Document Information Localization and Extraction
  4. eRisk: Early Risk Prediction on the Internet
  5. EXIST: sEXism Identification in Social neTworks
  6. iDPP: Intelligent Disease Progression Prediction
  7. ImageCLEF: Multimedia Retrieval in CLEF
  8. JOKER: Automatic Wordplay Analysis
  9. LifeCLEF: Multimedia Retrieval in Nature
  10. LongEval: Longitudinal Evaluation of Model Performance
  11. PAN: Stylometry and Digital Text Forensics
  12. SimpleText: Automatic Simplification of Scientific Texts
  13. Touché + Online Democracy

Labs Publications:

  • Lab Overviews published in the LNCS proceedings
  • Lab Working Notes published in the CEUR-WS proceedings
  • Best of Labs papers will be nominated for submission to the CLEF 2023 LNCS proceedings

Labs Participation:

Important Dates:

  • Labs registration opens: tba