Trust in AI Unveiled: Navigating Deepfakes and Revolutionizing Trust Metrics
From Deepfakes' Deception to the SOLARIS Project's Trust-Building Endeavor. Explore the intricate web of trust in AI, from the challenges posed by deepfakes to the SOLARIS project's groundbreaking psychometric scale for measuring the perceived trustworthiness of GAN-generated content.
by Letizia Aquilino, PhD candidate, Università Cattolica del Sacro Cuore, DEXAI – Artificial Ethics
Trust is a fundamental cornerstone for the widespread acceptance of Artificial Intelligence (AI). As AI applications proliferate and embed themselves ever more deeply in daily life, understanding the dynamics of trust becomes increasingly crucial. These applications vary widely in the level of risk they carry, and the higher-risk ones can significantly affect individuals' health and socioeconomic status.
For instance, low-risk applications like Netflix's recommendation algorithm and e-commerce chatbots provide suggestions and assistance without major consequences when they err. In contrast, higher-risk applications, such as AI decision assistants in banking, can directly affect people's livelihoods, which makes trust essential.
To trust or not to trust: navigating online content
Interestingly, there are instances where people unknowingly trust AI, as with deepfakes. Deepfakes are hyper-realistic, AI-generated videos or images that typically manipulate faces and voices to create convincing but false content. These sophisticated manipulations blur the line between reality and AI-generated fabrication, emphasizing the need for heightened awareness and discernment.
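For readers curious about the mechanics, deepfakes are typically produced with generative adversarial networks (GANs), the same technology the SOLARIS scale targets: a generator network learns to produce content while a discriminator network learns to spot fakes, and the two improve against each other. Below is a minimal, illustrative sketch of that adversarial loop in Python. It assumes PyTorch is available and uses toy random vectors in place of real images; it is a conceptual sketch, not an actual deepfake pipeline.

```python
# Minimal sketch of the adversarial training loop behind deepfakes.
# Assumes PyTorch; the data here are toy random vectors, not real
# images, and the tiny networks are placeholders for illustration.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to synthetic "content".
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for real frames
    fake = G(torch.randn(batch, latent_dim))   # generator's forgeries

    # 1) Train the discriminator to separate real from fake.
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The better the generator gets at this game, the harder its output is to distinguish from authentic footage, which is precisely what makes deepfakes so persuasive.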
Moreover, trust in online content varies based on characteristics like cognitive abilities, ideological beliefs, age, social media usage, and relationship dynamics. Cognitive factors such as analytic thinking and information verification behavior influence trust, as do ideological factors like conspiracy ideation and institutional trust. Additionally, personal traits like altruism, self-esteem, and narcissism play a role.
Contextual factors include the presence of warnings, virality, source presentation, and emotionally charged headlines. This complexity reflects the multifaceted nature of trust, underscoring its impact on AI acceptance and shaping attitudes towards both online and AI-generated content.
Facing fake content: the SOLARIS project’s trustworthiness scale
The SOLARIS project's primary objective is to pioneer a psychometric scale for assessing the perceived trustworthiness of content produced by Generative Adversarial Networks (GANs). Currently, there is a conspicuous absence of valid and comprehensive scales measuring the perceived trustworthiness of GAN-generated content, which hinders the nuanced understanding required for evaluating deepfakes and similar material.
The project aims to fill this gap by designing a multidimensional scale focused on individuals' perceptions of content-related aspects. This scale, distinct from existing measures that capture individual characteristics, seeks to determine how likely GAN-generated content is to be perceived as trustworthy.
The scale encompasses at least three perceived features: the vividness of the person in the video, the credibility of the source, and the believability of the information in the message. The final number of dimensions will emerge organically during scale development, incorporating insights from item-generation interviews and a thorough review of existing measures.
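To make the multidimensional structure concrete, here is a purely hypothetical scoring sketch in Python. The dimension names follow the article, but the item wordings, the seven-point response format, and the mean-based aggregation are all illustrative assumptions; none of this is the actual SOLARIS instrument.

```python
# Hypothetical scoring sketch for a multidimensional trustworthiness
# scale. The dimension names follow the article (vividness, source
# credibility, information believability), but the items, the 1-7
# response format, and the mean-based scoring are invented for
# illustration -- this is NOT the actual SOLARIS instrument.
from statistics import mean

DIMENSIONS = {
    "vividness": [
        "The person in the video looks lifelike.",
        "The facial movements appear natural.",
    ],
    "source_credibility": [
        "The source of this video seems trustworthy.",
        "I would expect this source to share accurate content.",
    ],
    "information_believability": [
        "The message in the video is plausible.",
        "The claims made could be verified elsewhere.",
    ],
}

def score(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average each dimension's item responses (1 = strongly disagree,
    7 = strongly agree) into a per-dimension subscale score."""
    return {dim: mean(values) for dim, values in responses.items()}

# One participant's made-up answers, in item order per dimension.
participant = {
    "vividness": [6, 5],
    "source_credibility": [3, 2],
    "information_believability": [4, 3],
}
print(score(participant))
# -> {'vividness': 5.5, 'source_credibility': 2.5,
#     'information_believability': 3.5}
```

In real scale development, the number of dimensions and the items retained would be determined empirically, for instance through factor analysis of pilot data, which is consistent with the project's plan to let the final dimensionality emerge during development.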
Initially intended for development in two languages, the scale has been expanded to cover three: English, Slovene, and Italian. While integral to the project's own experimental designs, it will also be made publicly available for external studies through publications and conference presentations, fostering broader use and understanding of trust in GAN-generated content.
The future of trustworthy AI: where to?
The psychometric scale stands as a crucial component of the project's experiments, enabling rigorous assessment of the perceived trustworthiness of GAN-generated content. Its integration into the experimental frameworks ensures robust testing and validation, providing valuable insights for SOLARIS's ongoing research.
Beyond its immediate application, the scale holds enduring significance for future work. Once validated, it can serve as a versatile instrument for assessing trustworthiness, laying a foundation for the ethical development of AI, particularly in educational applications. The scale thus emerges as a key asset in fostering responsible AI deployment and instilling trust in educational contexts.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them.