Introduction
Since ChatGPT, developed by OpenAI, became publicly available in 2022, followed by the proliferation of other generative artificial intelligence (GenAI) systems capable of generating human-like responses to text-based input, there has been extensive public discourse and debate about the future of education. Amongst the numerous articles, TED talks, conference keynotes, expert panel discussions, workshops, friendly conversations and heated debates that predict either an education dystopia or the genesis of a golden GenAI-empowered education era, a key question is emerging: “Should we be teaching children and young people about AI in schools?”
While GenAI-powered tools can provide many positive enhancements in learning environments, such as personalised learning and support for accessibility, they also present several online harms, including misinformation, ‘artificial hallucinations’, i.e., “instances where an AI chatbot generates fictional, erroneous, or unsubstantiated information in response to queries” (Kumar et al. 2023), biased outputs, as well as safety and security concerns. Some sceptics also fear that over-reliance on GenAI may lead to passive consumption of information and a reduction in critical questioning and deep thinking in learners. Significant work by UNESCO has already explored how GenAI innovations showcase the great potential of AI technologies for making education more accessible, especially in remote and rural areas and for students with disabilities, while also raising concerns related to “data confidentiality and the preponderance of major global corporations in this sector with guardrails lacking” (p.5).
With the popularisation of GenAI technologies and the integration of large language models into existing, familiar technological tools, children and young people often use them in everyday life as extensions of services they already know (e.g., Google offers AI-empowered responses in search results and Snapchat integrates an AI chatbot option). In 2024, the National Literacy Trust surveyed more than 15,000 young people aged 13 to 18 and found that three quarters of children aged 8 to 13 had already used generative AI, despite such tools being intended for users aged 13 and over.
There is, therefore, an urgent need to enhance awareness, knowledge and skills among children and young people for navigating their new GenAI-empowered realities effectively, safely and ethically. Developing Media and Information Literacy (MIL), which UNESCO describes as a set of competencies that apply to information and media content in all formats, is the foundation for accessing, retrieving, evaluating, using, creating and sharing AI-generated information and content “in a critical, ethical and effective way, in order to participate and engage in personal, professional and societal activities” (p.29). While regulation and setting safeguards are a priority, MIL can further protect against the potential risks associated with GenAI technologies, such as online harms and exploitation, which are now intensifying with the rise of AI-generated content and the widespread proliferation of AI-generated deepfakes presenting maliciously manipulated information and media.
Generative Artificial Intelligence Skills in Schools (GenAISiS project)
Addressing the challenges, but also the opportunities, created within a fast-evolving AI-enabled world, the ‘Generative Artificial Intelligence Skills in Schools’ (GenAISiS) project, funded by Responsible AI UK, aims to explore the responsible use of GenAI and advance the development of media and information literacy skills in learners, as a key pillar for educating a democratic citizenship grounded in critical thinking and values of equality. Children and young people make up a significant portion of active internet users globally, with growing numbers going online at a younger age and already using popular GenAI tools for various purposes, including entertainment, creativity and academic tasks.
Partnering with secondary school students, the project aims to co-create open educational resources, articulating student voice and enacting student experience via fictitious characters and cartoon video stories. A co-produced open educational toolkit, with resources on GenAI issues related to data privacy and safety; ethical awareness and responsible use; prompt engineering; information literacy, bias and misrepresentation; and responsible use, transparency and accountability, will be widely disseminated via open workshops. GenAISiS aims to offer a transformative pedagogical approach, raising public awareness, promoting equity and amplifying the student voice.
The project also aims to recognise and empower school librarians as key agents in fostering the responsible use of GenAI, with the Chartered Institute of Library and Information Professionals in Scotland (CILIPS) and three school librarians (Figure 1) as direct partners. Equity and inclusion are at the heart of the project through direct engagement with underrepresented groups (by sociodemographic characteristics and learning differences), ensuring diverse participation with support from CILIPS BAME, LGBTQ+ and Disability networks.

Figure 1. Project Partners
Furthermore, the project contributes to the pressing need to equip children and young people with key Media and Information Literacy (MIL) skills related to GenAI, aligning with the United Nations’ ‘Convention on the Rights of the Child’ (UNCRC) (1989). For example, UNCRC Article 16, ‘Every child has the right to privacy’, considered through an MIL lens in the context of GenAI, relates to equipping children and young people with an understanding of how AI collects and processes data and of how to engage critically and safely with AI-powered platforms, recognising privacy risks and using AI ethically without compromising privacy and personal data.
Similarly, Article 17, ‘Every child has the right to access reliable information from a variety of sources’, when interpreted in relation to GenAI, means that young people have the right to access accurate, diverse and trustworthy information across GenAI tools. It also means developing the critical digital literacy skills to evaluate AI-generated content, distinguishing reliable information from ‘artificial hallucinations’, misinformation or biased outputs produced by GenAI models, and cross-checking the information returned against credible and reliable sources. Finally, it implies ethical engagement with content for positive outcomes, such as learning and creativity, rather than engagement with misleading or harmful content.
Finally, Article 19, ‘Every child has the right to be protected from violence, abuse, and neglect’, in view of GenAI use, is connected to equipping young people with the skills to recognise, report and protect themselves from AI-driven misinformation or manipulation.
The Information Commissioner’s Office (ICO) has explained in more detail what the United Nations Convention on the Rights of the Child means for the online environment, considering children’s best interests. Other work in this area includes the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, an initiative by the Council of Europe to establish AI guidelines and regulations for protecting human rights, democratic principles and the rule of law.
Project Objectives
The project objectives (Figure 2) include an animated video cartoon story, an open educational toolkit and three openly available training workshops, aiming to strengthen young people’s information and media literacy skills and to increase the number of schools delivering digital skills-related activities. The work allows learners to express their own voices and engage in dialogue about the challenges and opportunities of GenAI technologies, develop confidence and move beyond a sole focus on negative perceptions of GenAI.

Figure 2. Project Objectives
Empirical Data and Cartoon Story Development
GenAISiS has already completed the collection of authentic empirical data via GenAI-related practical activities, questionnaires and focus groups (participants were 13 years old). The practical activities involved using a GenAI tool to search for information and create a visual on the topic of the ‘UNESCO Sustainable Development Goals’, placing participants in the ‘shoes’ of a cartoon character they created and critically exploring key directions in GenAI use, such as transparency, information literacy, bias and privacy, while reflecting on both positive and negative outcomes (Figure 3).

Figure 3. Visits to schools as part of the GenAISiS project
Based on the empirical data, the project team is currently developing a co-created GenAI video cartoon educational story with young people. Co-creation, as Bovill notes, is a powerful way of fostering inclusivity and building agency in students, making learning content more relatable and engaging. The cartoon story is being created using Plotagon Story, commercial animation software that allows users to build engaging stories with scenes, characters, transitions, emotions and sound effects. The story includes a total of five thematic episodes that address a range of GenAI-related skills:
The first episode will address ‘Understanding GenAI’, focusing on how GenAI systems work and their strengths and limitations. A draft cartoon video story for this theme, titled ‘An Introduction to Echo Hey-Aye’, is now available via this link. The story centres on the introduction of Echo Hey-Aye, the first GenAI teacher at an imaginary school called Linden Stone. On their first day, Echo Hey-Aye meets the students, who are curious and excited to find out about the new GenAI teacher. Echo explains their capabilities, including generating text, images and ideas, as well as assisting with various subjects. The students ask several questions about Echo’s functions, including how they learn from data, inviting Echo to introduce key concepts such as large language models, pattern recognition and prediction in AI (Figure 4).

Figure 4. ‘An Introduction to Echo Hey-Aye’ video cartoon story
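The idea of pattern recognition and prediction that Echo introduces can also be illustrated outside the story. The toy sketch below is not part of the project materials and is far simpler than any real large language model; it simply counts which word tends to follow which in a small piece of text and then ‘predicts’ the most likely next word, which captures the basic intuition behind next-word prediction in GenAI systems.

from collections import Counter, defaultdict

training_text = (
    "students ask questions and the teacher answers questions "
    "students ask the teacher and the teacher answers"
)

# Count which word tends to follow each word - a simple "pattern" learned from text.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, or '?' if the word is unseen."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("teacher"))   # prints 'answers' - the most common continuation here
print(predict_next("students"))  # prints 'ask'

Real GenAI systems work with vastly larger training data and neural networks rather than simple counts, but the underlying idea of predicting likely continuations from learned patterns is the same.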
Additional episodes will focus on ‘Data Privacy & Safety / Ethical Awareness & Responsible Use’, highlighting examples of data privacy and safety when interacting with GenAI, including safety issues, ethical considerations and risks in data collection; ‘Prompt Engineering’, addressing how to craft clear and detailed questions (prompts) to obtain accurate and relevant responses; ‘Information Literacy, Bias and Misrepresentation’, focusing on critically evaluating, fact-checking and verifying the accuracy and relevance of GenAI-generated content, rather than accepting AI responses at face value; and, finally, ‘Responsible Use, Transparency and Accountability’, promoting responsible use, transparency and accountability in GenAI design and interactions, with awareness of the impact and benefits of GenAI-related decisions.
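As an illustration of the kind of refinement the ‘Prompt Engineering’ episode describes, the short sketch below contrasts a vague prompt with a more detailed one. The prompts are invented examples for illustration only, not wording taken from the project toolkit.

vague_prompt = "Tell me about climate change."

detailed_prompt = (
    "Explain three causes of climate change for a 13-year-old audience, "
    "in under 150 words, and list two reliable sources I could check."
)

# A clearer prompt states the task, the audience, the expected length and how
# the answer can be verified, making accurate, relevant responses easier to
# obtain and to fact-check.
for label, prompt in [("Vague", vague_prompt), ("Detailed", detailed_prompt)]:
    print(f"{label} prompt ({len(prompt.split())} words): {prompt}")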
GenAISiS Educational Toolkit
The work will also include the development of an educational toolkit with activities and resources focusing on the five thematic directions set out in the cartoon video stories, which can be utilised by education professionals (teachers, school nurses), parents/carers and anyone who supports young people. The toolkit will contain useful resources linked to the five themes, including learning outcomes, lead-in questions, a selection of learning activities, ideas for educators (tagged by function), and additional information in the form of events and case studies (Figure 5).

Figure 5. Example of Toolkit
Project Outcomes and Dissemination
The project outcomes (video cartoon stories and educational toolkit) will be openly shared online under Creative Commons licences. The resources are aimed at school teachers, school and public librarians, school nurses, parents and everyone who engages with young people and is interested in supporting them to navigate their online environments safely, effectively and ethically. The project design has already been showcased in the media (on STV News in November 2024) (Figure 6).

Figure 6. STV News coverage (November 2024)
Several dissemination events will also be organised for the project, both in person and online. A free, open, in-person event has been organised for 18 June 2025, 1:30–3:30pm, at the Mitchell Library in Glasgow, in collaboration with the Chartered Institute of Library and Information Professionals. Tickets are available via this link. The work will be completed in August 2025.
Project Team: Dr Konstantina Martzoukou (RGU), Dr Pascal Ezenkwu (RGU), Sean McNamara (CILIPS), Kirsten MacQuarrie (CILIPS), Emma Grey (Forfar Academy), Ioannis Panayiotakis (Eastwood High School), Diane Scott (Hazlehead Academy) and Research Assistants, Beulah Lowry and Palika Vithana.
Funder: This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/Y009800/1], through funding from Responsible AI UK (RAI-SK-BID-00024).
Suggested citation: Martzoukou, K. (2025). Generative Artificial Intelligence Skills in Schools (GenAISiS project). SLSS Research Blog (RGU), 2025/05. Available at: https://rgu-slss.blog/?p=2219