
AIXPERT addresses some of the most fundamental challenges facing modern AI, including explainability, transparency, accountability, autonomy, and robustness. As AI systems increasingly influence critical decisions across society and industry, AIXPERT responds to the urgent need for trustworthy, human-centric AI that can be understood, monitored, and governed by its users.
The project proposes a next-generation, architecture-agnostic AI framework designed to make intelligent systems more understandable, reliable, and socially acceptable. AIXPERT’s vision is to advance trustworthy AI by developing a situation-aware, adaptable AI-agentic platform that can encapsulate and coordinate diverse AI models, regardless of their underlying architecture. By combining multi-agent systems, generative AI, and explainable multimodal foundation models, the platform enables complex AI systems to provide clear, context-sensitive explanations of their behavior while remaining flexible and scalable.
A core innovation of AIXPERT lies in the integration of real-time human feedback into AI decision-making processes. This approach ensures that AI systems are not only powerful and autonomous but also transparent, accountable, and aligned with human values, fostering greater trust and usability across different stakeholder groups.
The AIXPERT platform follows a multi-layered architecture, with each layer addressing specific aspects of AI interpretability and explainability. The Agent–World Interface Layer defines and coordinates AI agents, their situational awareness, and their interaction with real-world knowledge sources. The Dialogue Mediation Layer manages communication between users and agents, as well as among agents themselves, enabling transparent and meaningful human–AI interaction. At the foundation, the Cognitive Foundation Layer provides core AI capabilities through explainable multimodal foundation models capable of handling text, image, and audio data while addressing bias, inclusiveness, and robustness.
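The flow between these three layers can be illustrated with a minimal sketch. All class and method names below are illustrative assumptions, not the project's actual API: the point is simply how a request passes from agent coordination, through dialogue mediation, down to an explainable foundation model, and returns as a decision paired with its explanation.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveFoundationLayer:
    """Core model layer: returns a prediction plus an explanation (stubbed)."""
    def infer(self, modality: str, payload: str) -> dict:
        # A real implementation would query an explainable multimodal
        # foundation model; here we only stub the shape of its output.
        return {"prediction": f"label-for-{payload}",
                "explanation": f"salient features of the {modality} input"}

@dataclass
class DialogueMediationLayer:
    """Turns raw model output into a transparent, user-facing answer."""
    foundation: CognitiveFoundationLayer
    def answer(self, modality: str, payload: str) -> str:
        result = self.foundation.infer(modality, payload)
        return (f"Decision: {result['prediction']} "
                f"(because: {result['explanation']})")

@dataclass
class AgentWorldInterfaceLayer:
    """Coordinates agents and routes situational queries downward."""
    dialogue: DialogueMediationLayer
    log: list = field(default_factory=list)
    def handle(self, modality: str, payload: str) -> str:
        response = self.dialogue.answer(modality, payload)
        self.log.append((modality, payload))  # situational-awareness trace
        return response

platform = AgentWorldInterfaceLayer(
    DialogueMediationLayer(CognitiveFoundationLayer()))
print(platform.handle("image", "x-ray-042"))
```

The key design point the sketch captures is that explanations are produced at the foundation layer and preserved, not discarded, as the answer travels upward to the user.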
The effectiveness and applicability of the AIXPERT framework will be verified and validated through five real-world pilot demonstrations in diverse domains, including healthcare, recruitment services, manufacturing, educational robotics, and the creative industries. These pilots will evaluate the introduced innovations through simulations and controlled experimentation, highlighting both societal and economic impact and demonstrating the value of explainable and trustworthy AI in operational environments.
AIXPERT pursues a set of interrelated objectives aimed at advancing the state of the art in trustworthy AI:
- To provide an adaptable, situation-aware AI-agentic platform for explainable, accountable, and transparent AI systems.
- To develop a comprehensive framework for collectively assessing AI trustworthiness, covering explainability, transparency, accountability, and autonomy across diverse operating conditions.
- To design and integrate explainable multimodal foundation models that enhance AI transparency, performance, and social equity across applications.
- To verify and validate the proposed framework through five representative pilots, assessing its effectiveness in real-world use cases.
- To raise awareness and ensure sustainability of the project’s results through dissemination activities, policy and regulatory recommendations, and exploitation frameworks.
Within the AIXPERT consortium, ITML leads Task 6.1, focusing on the development of a No-Code framework for the dynamic creation, combination, and configuration of AI agents. This work enables both technical and non-technical users to design, customise, and deploy explainable AI agents through an intuitive, drag-and-drop interface supported by a modular and flexible backend architecture. Special emphasis is placed on transparency and explainability, with integrated monitoring and feedback mechanisms that allow users to understand how configuration choices affect agent behaviour and decision-making.
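A No-Code workflow of this kind typically ends with a declarative configuration (for example, one emitted by a drag-and-drop editor) being validated and turned into an agent description. The sketch below illustrates that idea only; the field names and schema are assumptions for illustration, not the actual Task 6.1 design.

```python
# Hypothetical schema: every agent needs a name, a backing model,
# and an explainer component; monitoring defaults to enabled,
# reflecting the framework's emphasis on transparency.
REQUIRED_FIELDS = {"name", "model", "explainer"}

def build_agent(config: dict) -> dict:
    """Validate a No-Code configuration and return an agent description."""
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        # Surfacing missing fields early keeps misconfiguration
        # visible to non-technical users instead of failing at runtime.
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {
        "name": config["name"],
        "model": config["model"],
        "explainer": config["explainer"],
        "monitoring": config.get("monitoring", True),
    }

agent = build_agent({"name": "triage-assistant",
                     "model": "multimodal-fm",
                     "explainer": "saliency"})
```

Validating the configuration before instantiation is what lets configuration mistakes be reported in the editor itself, which is central to making agent behaviour understandable to non-technical users.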
Beyond leading T6.1, ITML actively supports the project’s broader technical activities, contributing to system integration, validation, scalability, and robustness, and ensuring that the AIXPERT platform delivers practical, user-friendly, and trustworthy AI solutions across diverse application domains.
For more information, visit the official project website: https://aixpert-project.eu/