Generative Artificial Intelligence: Building a Responsible Future by Addressing Deception and Misinformation


Coşkun M., Kara Aydemir A. G., Akman Kadioğlu E.

STS TÜRKİYE – METU UEAM : STS Meets Ethics Joint Conference, Ankara, Türkiye, 31 October - 02 November 2023, p.1

  • Publication Type: Conference Paper / Abstract
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Page Numbers: p.1
  • Akdeniz University Affiliated: Yes

Abstract

The emergence of generative artificial intelligence (AI) has garnered significant global attention. In November 2022, OpenAI, an AI research organization known for developing natural language processing models and AI technologies, introduced ChatGPT, a conversational platform built on its GPT (Generative Pre-trained Transformer) models, and attracted millions of users. The impressive competence ChatGPT displayed in handling complex tasks sparked substantial enthusiasm and led to extensive investigation into the potential of generative AI. Beyond ChatGPT, numerous other generative AI models are now in widespread use, serving a diverse range of users. Thanks to these capabilities, the adoption of generative AI has rapidly gained momentum, and this widespread adoption creates an urgent need to focus on responsible and ethical use and to raise awareness, since new general-purpose technologies carry risks as well as opportunities. Accordingly, before user habits become established, it is critical to embrace a thoughtful and proactive stance that weighs potential advantages against possible risks.

The first concern we must address is the transparency of the training datasets used by generative AI models. This issue holds significant importance because generative models create content based on the data they were trained on. It is crucial to note that, like people, AI algorithms are vulnerable to biases that can render their decisions unfair. Bias in AI is defined as inherent bias present in the data used for training, which can result in discrimination and various unintended societal consequences. Fairness in AI means assessing AI algorithms for potential bias with respect to demographic characteristics such as gender, race, religion, ethnicity, and sexual orientation, and developing algorithms that address that bias. Fairness entails the absence of any prejudice or favoritism towards an individual or group due to their inherent or acquired characteristics. Face recognition systems illustrate the problem: their datasets often display an imbalance in the representation of light-skinned and dark-skinned individuals. Relying solely on a binary classification of males and females is inadequate; race must be incorporated as a variable to create intersectional subgroups, such as light-skinned males/females and dark-skinned males/females. Consider a scenario in which a generative AI model trains on predominantly negative text data: the model is then predisposed to generate negative content, potentially resulting in detrimental consequences such as the spread of misinformation or the propagation of cyberbullying.
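The intersectional imbalance described above can be made concrete with a short audit script. The following is a minimal sketch, assuming a hypothetical dataset annotated with (skin tone, gender) label pairs; the function names and the toy counts are illustrative, not taken from any real face recognition dataset.

```python
from collections import Counter

def subgroup_counts(samples):
    """Count intersectional subgroups (skin tone x gender) in a dataset.

    `samples` is a list of (skin_tone, gender) pairs -- hypothetical
    annotations; a real audit would need carefully collected labels.
    """
    return Counter(samples)

def imbalance_ratio(counts):
    """Ratio of the largest to the smallest subgroup; 1.0 means balanced."""
    values = list(counts.values())
    return max(values) / min(values)

# Toy dataset illustrating the kind of imbalance discussed above
dataset = (
    [("light", "male")] * 400
    + [("light", "female")] * 350
    + [("dark", "male")] * 150
    + [("dark", "female")] * 100
)

counts = subgroup_counts(dataset)
print(counts)
print(f"imbalance ratio: {imbalance_ratio(counts):.1f}")  # 400/100 = 4.0
```

Note that auditing only the marginal distributions (light vs. dark, or male vs. female) would miss this: the example is balanced enough on neither axis alone to reveal that dark-skinned females are the smallest subgroup.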

Data biases in sensitive applications such as medicine can be even more dangerous. Artificial intelligence is increasingly utilized in various applications, including medical imaging, with the hope that AI in clinical medicine will refine diagnostic accuracy and rule-out capabilities. Puyol-Antón and colleagues (2021) conducted a study evaluating the fairness of deep learning models used in cardiac MR segmentation, analyzing the UK Biobank across racial and gender groups to identify any imbalance in the training dataset. Cardiac structure, function, and the etiology of cardiovascular disease differ across demographic characteristics such as race and gender, so bias plays a crucial role in AI models designed for analyzing cardiac images. Their results indicated that racial bias exists in deep-learning-based cardiac segmentation models, and they argued that this bias could result from the unbalanced nature of the training data: models trained on the UK Biobank database, which is gender balanced but not race balanced, showed racial bias but not gender bias.
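The kind of disaggregated evaluation used in such fairness studies can be sketched in a few lines. This is not the authors' actual pipeline, only a toy illustration, assuming binary segmentation masks represented as flat 0/1 lists and hypothetical group labels; the Dice score used here is a standard overlap metric for segmentation.

```python
def dice_score(pred, target):
    """Dice overlap between two binary masks (lists of 0/1).

    1.0 means perfect overlap; defined as 1.0 when both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

def per_group_mean_dice(results):
    """Average Dice per demographic group from (group, pred, target) records."""
    sums, counts = {}, {}
    for group, pred, target in results:
        sums[group] = sums.get(group, 0.0) + dice_score(pred, target)
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

# Toy records: one (hypothetical) group is systematically segmented worse
records = [
    ("group_A", [1, 1, 0, 0], [1, 1, 0, 0]),  # perfect overlap -> Dice 1.0
    ("group_B", [1, 0, 0, 0], [1, 1, 0, 0]),  # partial overlap -> Dice 2/3
]
print(per_group_mean_dice(records))
```

A gap between the per-group averages, as in this toy example, is exactly the signal that a single aggregate metric over the whole test set would hide.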

Thus, the transparency of the training dataset employed in generative AI models stands as a critical issue, because this data exerts a profound influence on the content the models produce. If the training data exhibits biases or inaccuracies, the generated content likewise reflects these shortcomings. Furthermore, the complexity and scale of training data for generative AI models pose additional challenges, making it arduous for users to grasp the inner workings of a model and to identify potential biases lurking within. To mitigate these concerns, it is imperative for developers of generative AI models to embrace transparency. This involves openly sharing information about the data's size, composition, and origin, and enhancing the models' transparency through comprehensive documentation and user-friendly tools that facilitate a better understanding of their functioning.

Within the context of generative artificial intelligence, there are two main players: the "experts", the computer scientists responsible for creating AI models, and the "users" who consume AI-generated content. It is crucial to emphasize the role of users in this equation. Users of AI should approach these tools and their outputs with a balanced dose of skepticism; from this perspective, users should be trained to cultivate the critical thinking skills needed to question the credibility of AI-generated content. On the other hand, users can be considered a two-sided issue: besides consuming AI-generated content, they can collaborate with AI, and their inputs can be utilized in training datasets, a mutual benefit for AI systems and humans alike.

In conclusion, bias-mitigation strategies are increasingly important as a foundation for the responsible use of generative AI, since the rising risks of misinformation and deception could have negative and unintended consequences for society. Datasets and algorithms should be ideologically balanced and diverse in terms of demographic and other relevant variables, and ethical audits and fake-content detection should be performed. Raising awareness of the ethical and responsible use of generative AI is of paramount importance, so that users neither abuse the capabilities of generative AI nor are abused by AI-generated content.