
A child rights audit of GenAI in EdTech: Learning from five UK case studies


Ayça Atabey, Kim R. Sylwander and Sonia Livingstone

This report advances the DFC’s and 5Rights’ research on A better EdTech future for children and builds on the DFC’s earlier research on EdTech and education data.

“Across all GenAI tools we studied, children’s perspectives were largely excluded from their design, governance and evaluation, and all tools undermine children’s rights to privacy and protection from commercial exploitation.” (Ayça Atabey)

Executive summary

Generative artificial intelligence (GenAI) tools are increasingly embedded in digital services and products that are used for and in education (EdTech), raising urgent questions about their impact on children’s learning and rights. We take a holistic child rights approach to children’s learning to evaluate five GenAI tools used in education – Character.AI, Grammarly, MagicSchool AI, Microsoft Copilot and Mind’s Eye.

Using mixed sociolegal methods, including product walkthroughs, policy analysis and consultations with children, educators and experts around the world, we evaluate how these digital tools operate and assess the claims they make. These assessments are conducted in the light of the United Nations Convention on the Rights of the Child (UNCRC) and the Committee on the Rights of the Child’s General comment No. 25 on children’s rights in relation to the digital environment.

Our primary focus is on how these tools uphold key rights under the UNCRC, including children’s rights to education (Article 28), privacy (Article 16), to be heard and have their views respected (Article 12), non-discrimination (Article 2), appropriate support for children with disabilities (Article 23), access to information (Article 17) and freedom of expression (Article 13), as well as the principle of the best interests of the child (Article 3(1)).

While each GenAI tool offers the potential to facilitate learning through, for example, supporting creativity, communication and accessibility, each also presents notable risks. These risks arise from opaque data practices, poor transparency and commercial exploitation through nudges, advertising and tracking (including from age-inappropriate adult website advertisers), all of which are incompatible with children’s best interests. Overall, many claimed benefits remain unverified, and GenAI’s growing presence and increasingly ‘by default’ integration reflect institutional or market priorities more than children’s needs and interests.

Across the five tools studied, children’s perspectives were largely excluded from their design, governance and evaluation. The case studies reveal that these tools undermine children’s rights to privacy and protection from commercial exploitation. The tools may support rights such as education, play, expression and access to information, potentially enhancing children’s learning. However, evidence for these benefits is limited, especially concerning diverse groups of children, younger children and children with disabilities.

Key findings from the case studies:

  • Although marketed as an educational and supportive tool, Character.AI poses risks to children’s rights and wellbeing due to insufficient safeguards (as evidenced by ongoing litigation), misleading or harmful content, and design features that foster unhealthy emotional dependency. While it can offer some creative and motivational benefits (e.g., supporting freedom of expression, Article 13), especially in informal learning contexts, the risks it poses, particularly for vulnerable children (such as young children, children experiencing mental health difficulties and children with disabilities), may amount to violations of children’s rights to information (Article 17), education (Articles 28 and 29), health (Article 24), privacy (Article 16) and non-discrimination (Article 2).
  • While Grammarly can support children’s learning and expression, particularly for language learners and children with additional needs (Article 23), the audit found that Grammarly tracks and processes children’s data in ways that contradict its own privacy commitments. Further, it promotes inaccurate and potentially harmful AI detection tools that risk undermining student–teacher trust and that lack child-friendly safeguards or remedies. These practices risk violating children’s rights to privacy (Article 16), protection from commercial exploitation (Article 32) and to have their best interests treated as a primary consideration (Article 3).
  • MagicSchool AI makes strong claims about reducing teacher workload and supporting student learning. However, we identified a number of ways in which its design and data practices risk undermining children’s rights. For instance, despite the company’s stated privacy commitments, children are, by default, exposed to commercial tracking (including from adult site advertisers), and its chatbots have been found to provide misleading assurances and inappropriate or unsafe responses. This lack of safeguards, reliable emergency support and rights-based information puts children’s rights to privacy (Article 16), protection from commercial exploitation (Article 32), information (Article 17), and health and safety (Articles 6, 19 and 24) at risk.
  • Microsoft Copilot, embedded in the Microsoft 365 tools widely used in UK educational settings, is increasingly accessed by children despite originally being intended for adults. While it can support accessibility and expression and reduce teacher workload, particular risks arise from its design and deployment. A Dutch data protection impact assessment (DPIA) identified significant privacy concerns, including fabricated personal data, opaque filtering and extensive tracking. Our research revealed that when a child user accessed the service, commercial trackers were activated, including advertising trackers such as Google Ads. Copilot lacks a child rights impact assessment, clear opt-out options and transparency about its hidden filters. These practices can undermine children’s rights to privacy, agency and protection from exploitation (Articles 16, 32–36), while overreliance risks weakening core skills and trust in education.
  • Mind’s Eye is a GenAI art-expression tool developed to support children and adults with disabilities, using features such as eye-tracking technology and predictive text to enable participation in creative tasks. It offers significant potential to enhance children’s freedom of expression (Article 13) and the rights of children with disabilities (Article 23), particularly for those excluded from mainstream GenAI tools. However, biased or inappropriate suggestions risk undermining expression and engagement, while opaque data-sharing practices and the lack of child-friendly rights mechanisms raise privacy concerns (Articles 16, 17 and 32). Without child-specific research, transparency and accessible safeguards, the tool risks reinforcing inequalities rather than removing them.

We conclude that GenAI can only enhance education if children’s rights are placed at the centre of its design, deployment and governance. A holistic, child rights-based approach should guide decisions about GenAI use in education, ensuring that children’s best interests, participation and full range of rights are prioritised, with particular emphasis on their right to education. The potential benefits of GenAI in EdTech can only be fully achieved when learning is recognised not as an isolated outcome, but as a process supported by interconnected rights. This means mandatory child rights and data protection impact assessments, accessible safeguards, and meaningful participation of children in decision-making. Without these, children’s right to education can be undermined, and GenAI risks deepening inequalities and exploiting children, rather than supporting their learning.

“The pandemic saw a rapid digitalisation of education, but in the five years since, no one has stopped to think if this is benefiting children. This is having serious consequences: children are being tracked by erotic websites, and chatbots are providing wrong emergency helplines, risking lives and creating dependencies that can damage mental health. As the Government presses ahead with spreading AI far and wide, we must have rules in place to protect children and their education. In the Children’s Wellbeing and Schools Bill, parliament has a chance to ensure this happens.” (Colette Collins-Walsh, Head of UK Affairs at 5Rights Foundation)

Read the full report here

Read the press release

Image: created by the main author in Canva