SOLARIS News

From Tools to Actors: SOLARIS at the Forefront of the AI and Trust Debate

Members of the SOLARIS project consortium took part in “The Intimate Technological Revolution”, the 24th biennial international conference of the Society for Philosophy and Technology, dedicated to the societal and ethical issues raised by modern technology and the way it permeates every aspect of our lives.


Their contributions to this major international conference advanced the project’s mission to understand and counter AI-driven disinformation by addressing its epistemological, ethical, and institutional foundations.

Presentations by SOLARIS team members explored how emerging generative AI systems challenge existing models of trust, meaning-making, and regulation. A central theme across all contributions was the need to rethink not just how we regulate AI, but how we relate to it—as individuals, as institutions, and as democratic societies.

The first presentation, “Synthetic socio-technical systems: poiêsis as meaning making”, argued that generative AI systems are no longer mere tools but active participants in meaning-making. The SOLARIS project members introduced the term synthetic socio-technical systems to describe this shift, highlighting that we now interact with AI, not just through it. Drawing on the concept of poiêsis, they emphasized the need for an epistemology-cum-ethics approach to guide AI policy.


The second presentation, “Human Trust and Artificial Intelligence: Is an alignment possible?”, examined whether human trust can meaningfully be extended to AI, arguing that reliability and trustworthiness are the more appropriate concepts in human-machine relationships. Rather than trusting the AI itself, users place their trust in the people and systems behind it. As generative AI becomes more deeply integrated into society, the team called for a reassessment of how trust is defined and built in this new context.

The third presentation, “A network approach to public trust in generative AI”, argued that while the EU’s Trustworthy AI framework supports ethical AI development, it overlooks the broader social role of generative AI as an active participant in public discourse. The SOLARIS project members proposed a network approach to public trust, grounded in Actor-Network Theory, in which trust is shaped by interactions among a wide range of actors: developers, media, institutions, and citizens. Trust in AI, they suggested, depends not only on technical systems but on the credibility of the entire information environment. In an era of post-truth politics, restoring trust in AI requires rebuilding trust in the democratic and institutional systems that surround it.

Together, these presentations reinforced SOLARIS’s commitment to tackling disinformation not just as a technological problem, but as a complex socio-technical challenge. By addressing the deeper dynamics of trust and meaning in the AI age, the SOLARIS project continues to shape European thinking on how to secure democratic values in an increasingly synthetic information landscape.

This project is funded by the European Union under Grant agreement No. 101094665. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them.