Webinar: Enhancing Factual Reliability in Large Language Models: Data Curation, Tool Learning, and Knowledge Graphs

April 14, 10:00-11:00 (CET)

Alexander Weber, Michael Fromm, Nicolas Flores-Herr (Fraunhofer Institute for Intelligent Analysis and Information Systems), Maaike de Boer (TNO)

A photo collage of four researchers with a blue and light blue backdrop

How can we make large language models more factually reliable? Can better data, external tools, and structured knowledge help reduce hallucinations? This TrustLLM webinar will focus on improving the factual trustworthiness of LLMs.

As large language models are increasingly used in real‑world and high‑stakes settings, their tendency to produce fluent but incorrect information remains a major challenge. This webinar presents three contributions from the ongoing work of TrustLLM Work Package 3, which tackles factual reliability through data curation, tool learning, and structured knowledge extraction. Together, these perspectives show how better data, external tool integration, and structured knowledge representations can jointly strengthen the factual reliability and trustworthiness of large language models.

The first topic introduces JQL (Judging Quality across Languages), a scalable method for curating high‑quality multilingual datasets by distilling LLM‑based annotations into lightweight models built on cross‑lingual embeddings. This approach demonstrates how systematic data curation across languages can directly improve the factual grounding of LLMs.
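To give a feel for the distillation idea, the sketch below trains a lightweight classifier on simulated LLM-judge labels over fixed embedding vectors, then uses it to filter a corpus. All names and data are hypothetical stand-ins; this is an illustration of the general pattern, not the JQL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimensionality

def embed(doc_id, high_quality):
    """Stand-in for a frozen cross-lingual embedding model."""
    vec = rng.normal(0.0, 0.3, DIM)
    vec[0] += 1.0 if high_quality else -1.0  # synthetic quality signal
    return vec

# Simulated LLM-judge annotations on a small seed set (1 = keep, 0 = drop).
X = np.stack([embed(i, i % 2 == 0) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

# Lightweight "student": logistic regression trained by gradient descent,
# cheap enough to score billions of documents once distilled.
w, b = np.zeros(DIM), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted keep-probability
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def quality_score(vec):
    """Score a new document's embedding without calling the LLM judge."""
    return 1.0 / (1.0 + np.exp(-(vec @ w + b)))

kept = [i for i in range(200) if quality_score(X[i]) > 0.5]
```

Because the student operates on embeddings rather than raw text, the same classifier can score documents in any language the embedding model covers, which is what makes the approach scale across languages.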

The second contribution explores how structured tool use can help anchor model outputs in real‑world information. Tool learning enables LLMs to interact with external systems—such as retrievers or specialized tools—allowing them to verify facts and reason over up‑to‑date sources rather than relying solely on internal representations.
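The basic control flow of tool learning can be sketched as a loop: the model either emits a tool call, which the runtime executes and feeds back, or a final answer grounded in the retrieved evidence. The model, retriever, and message format below are hypothetical stand-ins, not the Work Package 3 system.

```python
def retrieve(query):
    """Stand-in for an external retriever or search API."""
    corpus = {
        "capital of Australia": "Canberra is the capital of Australia.",
    }
    return corpus.get(query, "no result")

TOOLS = {"retrieve": retrieve}

def fake_model(messages):
    """Stand-in for an LLM policy: first request a tool, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "retrieve", "args": {"query": "capital of Australia"}}
    evidence = next(m for m in messages if m["role"] == "tool")["content"]
    return {"answer": f"Based on the retrieved source: {evidence}"}

def run(question):
    """Tool-use loop: execute tool calls until the model answers."""
    messages = [{"role": "user", "content": question}]
    while True:
        step = fake_model(messages)
        if "tool" in step:
            result = TOOLS[step["tool"]](**step["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return step["answer"]

answer = run("What is the capital of Australia?")
```

The key property is that the final answer is conditioned on evidence fetched at inference time, rather than on whatever the model memorized during training.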

Finally, we explore knowledge graph construction and ontology learning as a way to enhance factual consistency. By comparing single‑step and multi‑step reasoning strategies, this work investigates how LLMs can more reliably extract structured knowledge from text, supporting downstream reasoning and verification tasks.
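The single-step versus multi-step contrast can be illustrated with a toy extractor: one pass that matches subject–relation–object patterns directly, versus a pipeline that first finds entities and then links them via relations. This toy uses pattern matching where the talk uses LLMs, and all patterns are hypothetical.

```python
import re

TEXT = "Marie Curie discovered polonium. Marie Curie was born in Warsaw."

def extract_single_step(text):
    """One pass: match subject-relation-object patterns directly."""
    pattern = r"(\w[\w ]+?) (discovered|was born in) (\w+)"
    return [tuple(m) for m in re.findall(pattern, text)]

def extract_multi_step(text):
    """Step 1: find candidate entities; step 2: link pairs via a relation."""
    entities = set(re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", text))
    triples = []
    for sent in (s.strip() for s in text.split(".") if s.strip()):
        for rel in ("discovered", "was born in"):
            if rel not in sent:
                continue
            subj = next((e for e in entities if sent.startswith(e)), None)
            obj = next((e for e in entities if sent.endswith(e)), None)
            if subj and obj:  # only link spans the entity step recognized
                triples.append((subj, rel, obj))
    return triples
```

Note that the multi-step version misses the "polonium" triple because its entity step never recognized the lowercase span; this is exactly the kind of pipeline interaction that makes comparing single-step and multi-step strategies informative.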

Please note that this webinar will be recorded!
