Deliverables

Work package 2: Definition & Mapping of GANs in social media

 

This report offers a broad examination of the challenges posed by AI-generated content from various disciplinary angles. It covers the evolution of Generative Adversarial Network (GAN) types, the policy implications of AI technologies, their impact on international relations, the semiotics of deepfakes and fake news, comparative legal approaches to regulation, and a case study of the distribution of deepfakes in Bulgarian media. It provides an introductory overview; a more detailed analysis is planned for Deliverable D2.2.

 

This report covers various types of neural networks and explores the ethical issues surrounding AI. The theoretical framework of the project is introduced, focusing on understanding social media users' perceptions of and trust in AI-generated content through a philosophy-of-technology lens and an Actor-Network Theory approach. The report also offers a semiotic analysis of deepfakes, highlighting their contextual and linguistic aspects. Additionally, a validated scale is developed to measure the perceived trustworthiness of GAN-generated content, and the Open Science approach is adopted for research data management. Finally, the report discusses the positive potential of generative AI and its implications for research practices under the new EU framework.
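
As background for the GAN architectures surveyed above, the following minimal sketch shows the adversarial training loop that defines a GAN: a generator learns to produce samples that a discriminator cannot tell apart from real ones. It is written in PyTorch on toy two-dimensional data; every layer size, learning rate and data choice is an illustrative assumption, not material from the deliverable.

    import torch
    import torch.nn as nn

    # Toy generator and discriminator; all sizes are illustrative.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def real_batch(n=128):
        # Stand-in for real data: points on a unit circle.
        angle = torch.rand(n, 1) * 6.2832
        return torch.cat([angle.cos(), angle.sin()], dim=1)

    for step in range(1000):
        real = real_batch()
        fake = G(torch.randn(real.size(0), 16))

        # Discriminator step: label real samples 1, generated samples 0.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(real.size(0), 1)) \
               + bce(D(fake.detach()), torch.zeros(real.size(0), 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator output 1 on fakes.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(real.size(0), 1))
        loss_g.backward()
        opt_g.step()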

 

Work package 4: Designing regulatory innovations for infodemic risks mitigation

 

This deliverable investigates the political risks and negative implications of the circulation of GAN technology, presenting an interdisciplinary mapping of the geopolitical consequences of the infodemic at both national and international levels. This mapping defines several strategic elements for understanding the infodemic phenomenon: first, the deliverable identifies the political actors involved in the diffusion and circulation of GANs, highlighting their interests in controlling public opinion and the ideological orientation of users. It then presents the impact of the infodemic on individual state communities and its effects on the international geopolitical structure.

 

This deliverable details work conducted in preparation for the regulatory innovation framework aimed at mitigating risks stemming from the circulation and dissemination of AI-generated disinformation, which will be tested during Use Case 2 in M26 (April 2025). Structured into four major sections, the deliverable surveys current policies, strategies and legislation aimed at tackling AI-generated disinformation within Europe, with the goal of identifying best practices and determining significant considerations for Use Case 2. Section 1 provides an overview of European-level mitigation strategies, including those related to AI governance, cybersecurity, self-regulation of online platforms, and media literacy and education initiatives. Section 2 explores similar mitigation strategies implemented at the national level, offering a diverse picture of the approaches taken by countries in different European regions, particularly Western Europe, the Western Balkans and the Baltic States. Section 3 briefly explores possible technical detection strategies before presenting the SOLARIS method of anomaly detection, which may be implemented in Use Case 2. Finally, Section 4 draws insights from the previous sections to propose considerations for Use Case 2 and the ongoing development of the regulatory innovation framework.
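
The SOLARIS anomaly-detection method itself is presented in Section 3 of the deliverable and is not reproduced here. Purely as a hypothetical illustration of the underlying principle (model the distribution of authentic material and flag items that deviate from it), the sketch below applies scikit-learn's IsolationForest to invented feature vectors; the features, corpus sizes and flagging behaviour are all assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Invented feature vectors standing in for descriptors extracted
    # from media items (e.g. frequency-domain statistics).
    rng = np.random.default_rng(0)
    authentic = rng.normal(0.0, 1.0, size=(500, 8))  # reference corpus
    incoming = rng.normal(0.5, 1.5, size=(20, 8))    # items to screen

    # Fit on authentic material only; items unlike that distribution
    # receive low scores and are flagged for human review.
    detector = IsolationForest(random_state=0).fit(authentic)
    scores = detector.decision_function(incoming)   # lower = more anomalous
    flagged = np.flatnonzero(detector.predict(incoming) == -1)
    print(f"flagged {flagged.size} of {len(incoming)} items for review")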

 

Work package 5: Use cases for understanding GANs' impact on politics


Deliverable D5.2 presents the results of the co-creative evaluation of the three SOLARIS Use Cases, building on the methodological foundation established in D5.1. It provides an integrated overview of outcomes for each Use Case, encompassing both experimental findings and technical progress.

UC1 focuses on the psychological and behavioral impact of deepfakes, examining how video quality, media literacy, and personal attitudes affect detection, emotional engagement, and subsequent sharing behaviors. Experimental evidence reveals that higher technical video quality reduces detectability, increases viewer liking, and, under certain conditions, shapes attitudes towards topical issues such as climate change and immigration. Critically, dissemination risks are amplified when deepfake content is perceived as trustworthy, especially among individuals with lower media literacy or stronger positive attitudes toward the depicted figures. From a technical standpoint, UC1 validates the use of the Perceived Deepfake Trustworthiness Questionnaire (PDTQ) across three languages and contextualizes the deployment of GANs for generating and manipulating experimental stimuli.
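
Questionnaire validation of the kind reported for the PDTQ typically includes an internal-consistency check. As a hypothetical illustration (the respondent data and the ten-item scale below are invented, not PDTQ results), Cronbach's alpha for a Likert scale can be computed as follows.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal consistency of a scale (respondents x items matrix)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    # Synthetic responses: 200 respondents, 10 Likert items (1-5)
    # driven by one latent attitude, so the items cohere.
    rng = np.random.default_rng(1)
    latent = rng.normal(3.0, 1.0, size=(200, 1))
    responses = np.clip(np.round(latent + rng.normal(0.0, 0.7, (200, 10))), 1, 5)
    print(f"alpha = {cronbach_alpha(responses):.2f}")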

UC2 shifted attention to the collective professional environment of journalists and editors, recognising newsrooms as both vulnerable targets and crucial gatekeepers of democratic resilience. It unfolded through two complementary activities that examined how professional newsrooms confront synthetic media threats: the first, held at ANSA in Rome in May 2025, combined a roundtable discussion with direct observation of newsroom practices, while the second, an online focus group in September, further explored journalists’ strategies, vulnerabilities, and strengths in handling synthetic content. By embedding AI-generated videos into editorial routines under breaking-news conditions, while ensuring ethical standards through journalists’ prior awareness of their artificial nature, UC2 balanced the controlled qualities of an experimental setting with the dynamic pressures of real-world newsroom decision-making.

UC3 pioneers value-based co-creation of GAN content in partnership with citizens, seeking to harness generative AI for positive democratic engagement and digital citizenship. Experimental designs focus on inclusive citizen science approaches, with technical frameworks supporting multi-platform distribution and participatory content creation.