Deliverables

Work package 2: Definition & Mapping of GANs in social media

 

This report examines the challenges posed by AI-generated content from several disciplinary angles. It covers the evolution of Generative Adversarial Network (GAN) architectures, the policy implications of AI technologies and their impact on international relations, the semiotics of deepfakes and fake news, comparative legal approaches to regulation, and a case study on the distribution of deepfakes in Bulgarian media. It provides an introductory overview; a more detailed analysis is planned for Deliverable D2.2.
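For context on the adversarial training idea underlying the GAN architectures discussed in the report, the following is a minimal illustrative sketch in PyTorch. The toy one-dimensional data distribution, model sizes, and hyperparameters are assumptions made purely for illustration and are not part of the project's methodology.

```python
# Minimal illustrative GAN training loop (PyTorch) on a toy 1-D distribution.
# Included only to illustrate the generator/discriminator idea; all values are
# illustrative assumptions, not the project's models or data.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Real samples drawn from a toy target distribution N(3, 0.5).
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator update: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The two networks are trained against each other: the discriminator learns to separate real from generated samples, while the generator learns to produce samples the discriminator can no longer distinguish from real ones.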

 

This report covers the main types of neural networks and explores the ethical issues surrounding AI. It introduces the theoretical framework of the project, which focuses on understanding social media users' perceptions of, and trust in, AI-generated content through the lens of philosophy of technology and an Actor-Network Theory approach. The report also presents a semiotic analysis of deepfakes, highlighting their contextual and linguistic aspects. In addition, a validated scale is developed to measure the perceived trustworthiness of GAN-generated content, and an Open Science approach is adopted for research data management. Finally, the report discusses the positive potential of generative AI and its implications for research practices under the new EU framework.

 

Work package 4: Designing regulatory innovations for infodemic risks mitigation

 

This deliverable investigates the political risks and negative implications arising from the circulation of GAN technology by presenting an interdisciplinary mapping of the geopolitical consequences of the infodemic at both national and international levels. This mapping defines strategic elements for understanding the infodemic phenomenon: first, the deliverable identifies the political actors involved in the diffusion and circulation of GAN-generated content, highlighting their interests in controlling public opinion and shaping the ideological orientation of users. It then presents the impact of the infodemic on individual state communities and its effects on the international geopolitical structure.

 

This deliverable details the work conducted in preparing the regulatory innovation framework aimed at mitigating risks stemming from the circulation and dissemination of AI-generated disinformation, which will be tested during Use Case 2 in M26 (April 2025). Structured into four main sections, the deliverable provides an overview of current policies, strategies, and legislation aimed at tackling AI-generated disinformation within Europe, with the goal of identifying best practices and key considerations for Use Case 2. Section 1 provides an overview of European-level mitigation strategies, including those related to AI governance, cybersecurity, self-regulation of online platforms, and media literacy and education initiatives. Section 2 then explores similar mitigation strategies implemented at the national level, providing a diverse picture of the approaches taken by countries in different European regions, in particular Western Europe, the Western Balkans, and the Baltic States. Section 3 briefly explores possible technical detection strategies before presenting the SOLARIS anomaly detection method, which may be implemented in Use Case 2. Finally, Section 4 draws on these sections to propose considerations for Use Case 2 and for the ongoing development of the regulatory innovation framework.
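To illustrate what anomaly detection means in this setting, the following is a generic sketch using scikit-learn's IsolationForest on hypothetical per-item content features. It is not the SOLARIS method described in the deliverable; the feature vectors, contamination rate, and data are illustrative assumptions only.

```python
# Generic anomaly-detection sketch (scikit-learn IsolationForest).
# NOT the SOLARIS method; only an illustration of flagging statistical
# outliers in a feature space of media items.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical feature vectors for media items (e.g., frequency-domain
# statistics, metadata-consistency scores); here purely synthetic data.
normal_items = rng.normal(0.0, 1.0, size=(500, 4))
suspect_items = rng.normal(4.0, 1.0, size=(10, 4))

# Fit the detector on items assumed to be ordinary content.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(normal_items)

# predict() returns +1 for inliers and -1 for items flagged as anomalous.
flags = model.predict(np.vstack([normal_items[:5], suspect_items]))
print(flags)
```

In practice, such a detector only flags items that deviate from the learned feature distribution; any operational pipeline would combine it with further verification steps.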