The vision behind TrustLLM

April 14, 10-11 (CET)

Portrait of Fredrik Heintz

Fredrik Heintz is a Professor of Computer Science at Linköping University and one of the key figures leading the TrustLLM project. With a research focus that includes trustworthy AI, Professor Heintz brings a wealth of expertise to the project. We spoke with him about the vision behind TrustLLM; he shared his insights into the challenges and innovations shaping the future of AI in Europe, emphasising the importance of collaboration and of maintaining European autonomy in this rapidly evolving field.

Professor Heintz, tell us about the vision behind TrustLLM.

– The vision of TrustLLM is to create trustworthy European large language models (LLMs), with a focus on covering underrepresented languages. The main goal is to develop an open, trustworthy, and factual LLM, starting with Germanic languages. This will lay the groundwork for an advanced and open ecosystem that supports the next generation of modular, trustworthy, sustainable, and democratised European LLMs. The TrustLLM project and its ecosystem aim to enhance and support context-aware human-machine interaction across diverse applications.

How do you envision the impact of this research on the field and society at large?

– The models we are developing might not be the most powerful in the world, but they should be trustworthy. This means we focus not only on what makes these models more reliable, but also on what makes them legally compliant. It turns out to be both unclear and challenging to make large language models trustworthy and compliant with the law. This is currently our main focus, while we also ensure that our models are as competent and useful as possible.

By focusing on Germanic languages, we aim to create a blueprint for future work on other language families. We hope this will help secure Europe's sovereignty in crucial AI technologies and establish a new framework for European collaboration on LLMs. Developing our own large language models is essential for Europe: we must lead through innovation, not just regulation. TrustLLM is an important step towards large-scale investments in Europe, ensuring that we take an active part and embed our values in new language models.

What challenges do you anticipate, and how do you plan to address them?

– The main challenge is collecting trustworthy, legally usable data across all the Germanic languages. Dealing with low-resource languages is another.

How can we make large language models more factually reliable? Can better data, external tools, and structured knowledge help reduce hallucinations? This TrustLLM webinar will focus on improving the factual trustworthiness of LLMs.

As large language models are increasingly used in real-world and high-stakes settings, their tendency to produce fluent but incorrect information remains a major challenge. This webinar presents three contributions from ongoing work in TrustLLM Work Package 3, which tackles factual reliability through three methodological approaches: data curation, tool learning, and structured knowledge extraction. Together, these perspectives show how better data, external tool integration, and structured knowledge representations can jointly strengthen the factual reliability and trustworthiness of large language models.

The first topic introduces JQL (Judging Quality across Languages), a scalable method for curating high-quality multilingual datasets by distilling LLM-based annotations into lightweight models built on cross-lingual embeddings. This approach demonstrates how systematic data curation across languages can directly improve the factual grounding of LLMs.
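The distillation idea behind this kind of curation pipeline can be sketched roughly as follows. All names and the toy "embeddings" are illustrative, not the actual JQL implementation: an expensive LLM judge labels a small seed sample for quality, and a lightweight classifier trained on fixed embeddings replicates those judgments cheaply across a much larger corpus.

```python
# Hypothetical sketch of LLM-annotation distillation for data curation.
# Real systems would prompt an actual LLM judge and use high-dimensional
# cross-lingual embeddings; here both are replaced by tiny stand-ins.

def llm_quality_label(doc_embedding):
    """Stand-in for an expensive LLM judge: a fixed rule on a toy
    2-d 'embedding' (1 = high quality, 0 = low quality)."""
    return 1 if doc_embedding[0] + doc_embedding[1] > 1.0 else 0

def train_centroid_classifier(embeddings, labels):
    """Distil the LLM labels into a nearest-centroid model, a minimal
    stand-in for the lightweight annotators described in the talk."""
    def centroid(cls):
        pts = [e for e, y in zip(embeddings, labels) if y == cls]
        return [sum(dim) / len(pts) for dim in zip(*pts)]
    c0, c1 = centroid(0), centroid(1)

    def predict(e):
        d0 = sum((a - b) ** 2 for a, b in zip(e, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(e, c1))
        return 1 if d1 < d0 else 0
    return predict

# Label a small seed set with the "LLM", then curate a large pool cheaply.
seed = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.7, 0.9]]
labels = [llm_quality_label(e) for e in seed]
keep = train_centroid_classifier(seed, labels)

pool = [[0.95, 0.9], [0.05, 0.1]]
curated = [e for e in pool if keep(e) == 1]
```

The point of the design is cost: the LLM judge runs only on the seed set, while the distilled model filters the full multilingual corpus.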
The second contribution explores how structured tool use can help anchor model outputs in real-world information. Tool learning enables LLMs to interact with external systems, such as retrievers or specialized tools, allowing them to verify facts and reason over up-to-date sources rather than relying solely on internal representations.
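The control flow of such a tool-augmented answer can be sketched as below. Everything here is a hypothetical mock, not the project's actual tool-learning framework: the model emits a tool call, a retriever supplies evidence, and the final answer is conditioned on that evidence instead of parametric memory alone.

```python
# Illustrative sketch of tool learning / retrieval-grounded answering.
# The retriever and the model's "decision" to call it are both mocked.

def mock_retriever(query):
    """Stand-in for an external retrieval tool (e.g. a search index)."""
    knowledge = {"capital of Australia": "Canberra"}
    return knowledge.get(query, "NO_RESULT")

def answer_with_tools(question):
    # Step 1: the model decides a fact lookup is needed and emits a
    # structured tool call (here hard-coded for the demo question).
    tool_call = {"tool": "retriever", "query": "capital of Australia"}
    # Step 2: the runtime executes the call against the external system.
    evidence = mock_retriever(tool_call["query"])
    # Step 3: the answer is grounded in the retrieved evidence rather
    # than in the model's internal representations alone.
    if evidence == "NO_RESULT":
        return "I could not verify this."
    return f"The capital of Australia is {evidence}."

print(answer_with_tools("What is the capital of Australia?"))
```

The explicit "NO_RESULT" branch illustrates why this helps with hallucination: when verification fails, the system can abstain instead of guessing.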
Finally, we explore knowledge graph construction and ontology learning as a way to enhance factual consistency. By comparing single‑step and multi‑step reasoning strategies, this work investigates how LLMs can more reliably extract structured knowledge from text, supporting downstream reasoning and verification tasks.
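The contrast between single-step and multi-step extraction can be illustrated with a deliberately tiny sketch. Regex rules stand in for LLM prompts here; the actual work compares LLM reasoning strategies, not pattern matching:

```python
# Hypothetical illustration of single-step vs multi-step knowledge
# extraction into (subject, relation, object) triples.
import re

def single_step_extract(text):
    """One pass: a single rule maps text directly to a triple,
    analogous to one LLM prompt that outputs triples in one shot."""
    m = re.match(r"(\w+) is the (\w+) of (\w+)", text)
    return [(m.group(1), m.group(2), m.group(3))] if m else []

def multi_step_extract(text):
    """Two passes, analogous to chained prompts:
    (1) identify entities, (2) classify the relation between them.
    Each intermediate step can be inspected and verified separately."""
    entities = re.findall(r"\b[A-Z]\w+", text)              # step 1
    relation = "capital" if "capital" in text else "related_to"  # step 2
    if len(entities) == 2:
        return [(entities[0], relation, entities[1])]
    return []

text = "Canberra is the capital of Australia"
print(single_step_extract(text))
print(multi_step_extract(text))
```

On this example both strategies agree; the multi-step variant's advantage in practice is that each intermediate output (entities, then relations) can be checked before it feeds downstream reasoning.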

Please note that this webinar will be recorded!
