QuaLLM-KG 2026

1st International Workshop on Quality in Large Language Models and Knowledge Graphs

In conjunction with EDBT/ICDT 2026, Tampere, Finland

(March 24, 2026)

About the QuaLLM-KG Workshop

The rapid progress of knowledge graphs (KGs) and large language models (LLMs) has transformed data science, data management, and artificial intelligence. Knowledge graphs have become a central paradigm for representing structured information, while LLMs provide unprecedented capabilities in natural language understanding and generation.

Their success depends heavily on the quality of the underlying data, models, and processes. KGs suffer from incompleteness, incorrect links, outdated facts, and lack of provenance; LLMs suffer from hallucinations, bias, lack of factual grounding, and reproducibility issues.

When KGs and LLMs are combined—KG-grounded LLMs or LLM-assisted KG construction—the quality challenges multiply and require novel evaluation, integration, and trustworthiness solutions.

The workshop brings together communities across databases, AI, NLP, knowledge representation, information retrieval, data quality, and data governance to foster research on ensuring quality in KGs, LLMs, and their hybrid systems.

QuaLLM-KG aims to bridge theoretical, methodological, and empirical perspectives while providing demonstrations and practical case studies to give attendees hands-on understanding of real-world solutions for LLM/KG quality.

Topics of Interest

Quality in Knowledge Graphs

  • Accuracy, consistency, completeness, freshness
  • Schema validation, constraint checking, error detection
  • Entity resolution, link prediction, ontology alignment
  • Provenance, explainability, trust in KG data
  • KG quality in dynamic and large-scale settings

Quality in Large Language Models

  • Hallucination reduction & factual grounding
  • Bias detection and mitigation
  • Metrics & benchmarks for quality assessment
  • Uncertainty estimation, calibration, interpretability

Synergies Between KGs and LLMs

  • KG-based grounding and fact-checking for LLMs
  • LLM-based KG enrichment, extraction, entity linking
  • Quality-driven prompting and fine-tuning
  • Hybrid KG–LLM architectures for quality assurance
  • Evaluation frameworks for integration and consistency

Benchmarks and Evaluation Frameworks

  • Datasets and metrics for KG & LLM quality
  • Tools for monitoring, validation, maintenance
  • Reproducibility, transparency, responsible AI

Applications and Case Studies

  • Scientific, industrial, enterprise use cases
  • Quality at scale
  • Human-in-the-loop quality control


Submission

The workshop solicits papers in the following categories:

  • Research Papers: Propose new approaches, theories, or techniques related to ensuring or evaluating quality in KGs and LLMs. These papers should make substantial theoretical or empirical contributions.
  • System Papers: Present new systems, frameworks, or platforms for managing, assessing, or improving quality in KGs and/or LLMs.
  • Experiments and Analysis Papers: Provide empirical evaluations of existing methods, offering insights into quality issues, benchmarks, reproducibility, or unexpected behaviors. Comparative studies in novel settings are welcome.
  • Application Papers: Describe practical experiences applying KGs, LLMs, or hybrid methods to real-world domains (e.g., healthcare, finance, scientific research), with an emphasis on quality management.
  • Vision Papers: Identify emerging challenges and future research directions, proposing new visions for quality-aware KGs and LLMs that could influence data management and AI research.
  • Demo Papers: Showcase innovative tools, platforms, or applications for assessing or improving the quality of KGs or LLMs. Interactive or WOW-effect demonstrations are particularly encouraged.

The workshop will accept full papers (up to 8 pages, excluding references) and short papers (up to 4 pages, excluding references) describing work in progress, systems, demos, applications, or vision/innovative ideas.

Submission deadline: January 15th, 2026
Notification of acceptance: February 8th, 2026
Camera-ready: February 20th, 2026

Authors must format their submissions according to the CEUR-WS proceedings template. All submissions must be in PDF format; the submission link will be provided soon.

Accepted papers will be published in the CEUR Workshop proceedings (CEUR-WS.org).


Organizers

Soror Sahri, Université Paris Cité, France

Sven Groppe, University of Lübeck, Germany

Farah Benamara, University of Toulouse & CNRS IPAL, Singapore


Program Committee

  • Asma Abboura, Hassiba Benbouali University of Chlef, Algeria
  • Hanieh Khorashadizadeh, University of Lübeck, Germany
  • Hazar Harmouch, University of Amsterdam, Netherlands
  • Imane Hocine, University of Luxembourg, Luxembourg
  • Jesualdo Fernandez Breis, Universidad de Murcia, IMIB-Arrixaca, Spain
  • Mohammad Sadoghi, University of California, USA
  • Mourad Ouzzani, Qatar Computing Research Institute, HBKU, Qatar
  • Nathalie Aussenac, IRIT–CNRS, France
  • Raphaël Troncy, EURECOM, France
  • Rinaldo Lima, Federal Rural University of Pernambuco, Brazil
  • Sanju Tiwari, Sharda University, India
  • Valter Uotila, University of Helsinki, Finland
