INTERNATIONAL EASTERN CONFERENCE ON HUMAN-COMPUTER INTERACTION, Erzurum, Türkiye, 23-25 November 2023, pp. 1
We are currently witnessing one of the most important technological developments with the advent of generative AI. The increasing prevalence of high-resolution cameras, high-speed internet, and machine learning capabilities, especially deep learning, is enabling the emergence of emotion (affective) computing. Advances in affective computing are expected to have a significant impact on education. Many people presume digital technology to be an unproblematic element of contemporary education; yet while digital tools offer great opportunities, they also bring inherent problems. In the rapidly evolving landscape of education, the incorporation of emotion computing has provided many affordances, especially in K-12 education, but has also raised ethical issues. In this study we explore adolescents' interaction with AI systems and draw attention to the ethical issues of emotion computing in K-12.
The young generation, often called "digital natives," interacts effortlessly with generative AI systems. They were born into a world where technology has become invisible, have grown up surrounded by ubiquitous AI technologies, and treat these virtual companions like old friends. However, this familiarity does not exist without its complexities. The young generation seeks a deeper level of interaction and connection with these technologies, expecting them to recognize and adapt to their emotions, much like a "trusted" old friend. Building affective responders that adapt, adjust, and empathize is a more achievable goal than the demanding task of maintaining perfectly consistent responses. Interestingly, despite their comfort with AI, this generation maintains a clear distinction between AI and human beings. They perceive AI as a machine rather than a natural entity, and they are aware of the limitations and boundaries of their interactions with it. This awareness fosters a unique relationship between humans and computers, allowing them to balance their desire for human-like emotional engagement with the knowledge that they are dealing with a machine. These insights can inform the design of new AI-supported education systems. Understanding the behavior of the young generation and meeting their expectations of generative AI could transform the interplay between education and technology. Accordingly, educational technology researchers and practitioners, policymakers, and other stakeholders need to pursue meaningful and responsible ways to integrate emotion computing into K-12 education. This mutually beneficial relationship could lead to various useful implementations, such as enhancing the skill development of individuals with autism or embedded assessment.
Considering this generation's familiarity and ease of communication with machines, the design of teaching and learning environments for them should place a strong emphasis on ethical considerations. If the curriculum can harmoniously incorporate these aspects, generative AI has the potential to enhance not only academic growth but also emotional intelligence in this generation.
Ethics play a critical role in the interaction between this generation and AI. Privacy, data security, emotional manipulation, autonomy, fairness, and bias are key concerns. The long-term psychological and societal effects, along with cultural issues, also need to be addressed, since this generation wants AI to adapt to their emotional state. To achieve this, an AI system must collect data and then infer the user's emotion with its trained model. First, data privacy and security are crucial, because the collected data concern users' emotional states. Such data can include users' facial expressions and gestures (image processing), voice levels (audio processing), and writing (text analysis), and therefore contain private information that should be treated as sensitive. The collection and use of emotional data must consequently be handled with the highest level of care.
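As a minimal illustration of such careful handling, the following Python sketch shows one possible data-minimization pattern: infer a coarse emotion label from the multimodal signals, retain only a pseudonymous identifier and that label, and discard the raw image, audio, and text. All names here (classify_emotion, process_interaction, EmotionRecord, the salt value) are our own hypothetical placeholders, not an existing system's API.

```python
"""Minimal sketch of privacy-conscious emotion-data handling:
keep only a pseudonym and a coarse label, never the raw signals."""

import hashlib
import time
from dataclasses import dataclass
from enum import Enum


class Emotion(Enum):
    NEUTRAL = "neutral"
    ENGAGED = "engaged"
    FRUSTRATED = "frustrated"
    CONFUSED = "confused"


@dataclass(frozen=True)
class EmotionRecord:
    """What the learning platform is allowed to retain."""
    pseudonym: str      # salted one-way hash, not the student's identity
    emotion: Emotion    # coarse label only, never the raw signals
    timestamp: float


def pseudonymize(student_id: str, salt: str) -> str:
    """Replace the real identifier with a salted one-way hash."""
    return hashlib.sha256((salt + student_id).encode()).hexdigest()[:16]


def classify_emotion(face_frame: bytes, audio_clip: bytes, text: str) -> Emotion:
    """Placeholder for a trained multimodal model (image + audio + text).
    A real system would call its own model here; this stub only
    illustrates the interface."""
    if "i don't understand" in text.lower():
        return Emotion.CONFUSED
    return Emotion.NEUTRAL


def process_interaction(student_id: str, face_frame: bytes,
                        audio_clip: bytes, text: str,
                        salt: str = "per-deployment-secret") -> EmotionRecord:
    """Infer the emotion, then keep only the minimal record."""
    emotion = classify_emotion(face_frame, audio_clip, text)
    record = EmotionRecord(
        pseudonym=pseudonymize(student_id, salt),
        emotion=emotion,
        timestamp=time.time(),
    )
    # The raw face_frame, audio_clip, and text go out of scope here and
    # are never written to storage: data minimization by design.
    return record


if __name__ == "__main__":
    rec = process_interaction("student-42", b"<frame>", b"<clip>",
                              "I don't understand this step")
    print(rec)
```

The design choice to hash identifiers and drop the raw signals reflects the point above: the sensitive material never leaves the inference step, so what is stored reveals far less about the learner.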
Second, the emotional responsiveness desired from the AI must be trained and delivered very carefully in order to avoid rudeness or emotional disruption, particularly for teenagers. Lastly, generative AI technologies are criticized for problems of data accuracy and accountability. They must be held accountable for the accuracy of their data and for the ethical behavior of their responses, so that their emotional responses remain free of bias.
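One simple way such accountability could be checked is a group-level audit of the model's predictions. The sketch below is our own hypothetical example (the function names, group labels, and 10% tolerance are assumptions, not values from the study): it compares how often an emotion model assigns a negative label to students from different groups and flags the model when the rates diverge.

```python
"""Illustrative fairness check: compare the rate of "frustrated" labels
across student groups on an evaluation set and flag large gaps."""

from collections import defaultdict
from typing import Dict, Iterable, Tuple


def negative_rate_by_group(
    predictions: Iterable[Tuple[str, str]]
) -> Dict[str, float]:
    """predictions: (group, predicted_emotion) pairs from an evaluation set."""
    totals: Dict[str, int] = defaultdict(int)
    negatives: Dict[str, int] = defaultdict(int)
    for group, emotion in predictions:
        totals[group] += 1
        if emotion == "frustrated":
            negatives[group] += 1
    return {group: negatives[group] / totals[group] for group in totals}


def flag_bias(rates: Dict[str, float], tolerance: float = 0.10) -> bool:
    """Return True if the gap between the most- and least-affected
    groups exceeds the tolerance."""
    return (max(rates.values()) - min(rates.values())) > tolerance


if __name__ == "__main__":
    eval_predictions = [
        ("group_a", "neutral"), ("group_a", "frustrated"), ("group_a", "neutral"),
        ("group_b", "frustrated"), ("group_b", "frustrated"), ("group_b", "neutral"),
    ]
    rates = negative_rate_by_group(eval_predictions)
    print(rates, "biased:", flag_bias(rates))
```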
To conclude, integrating emotion into computing for generative AI systems, in their relationships with the new generation, presents an opportunity for a new education system. Standing in opposition to this technological transformation is akin to resisting the revolutionary impact of the printing press. Instead of prohibiting generative AI in education altogether, an old-fashioned response, we should use it ethically and responsibly by teaching new generations how and why to use it. We should also equip AI with emotional and affective abilities so that it can interact with users without toxic, robotic speech. This perspective on educational design could then result in "A Powerful AI Supportive Education System". To ensure the responsible use of emotion computing in K-12 educational settings, school administrators and both pre-service and in-service teachers must receive relevant education and training. It is essential for teachers to possess the knowledge and literacy necessary to successfully integrate emotion computing practices and tools into their pedagogical approach. A coordinated effort at the national level is needed to create detailed policies that address the ethical, privacy, and educational consequences of implementing emotion computing in K-12 environments.