AI and Inclusion: Who Is Being Left Behind?

Artificial Intelligence has become a powerful force shaping education, leadership, and society. From adaptive learning platforms to automated recruitment systems, AI tools now influence how opportunities are created and how they are distributed. Yet amid the excitement over innovation, a quieter, more urgent question is often left underexplored: who is being left behind?

Inclusion cannot be assumed in AI. As scholars such as Kate Crawford (Atlas of AI, 2021) and Ruha Benjamin (Race After Technology, 2019) have shown, AI systems are built on human data, and that data carries the biases, blind spots, and exclusions of the societies from which it is drawn. If we do not design for inclusion, we risk building technologies that reinforce exclusion at scale.

Consider the field of education.

AI-driven personalised learning systems promise to tailor content to individual learners. But if these systems are trained on datasets that reflect primarily neurotypical learning patterns, neurodivergent students may find themselves marginalised by the very tools meant to support them.

As Professor Sue Fletcher-Watson has argued, neurodivergent learners often process information in ways not well captured by mainstream data models. Without inclusive design, AI can unintentionally narrow rather than broaden learning opportunities.

Cultural and linguistic diversity presents another challenge. AI language models tend to be dominated by data from major global languages, particularly English. This bias affects everything from automated translation to voice recognition to AI-driven content generation. As Timnit Gebru and colleagues highlighted in their seminal 2021 paper on large language models, "On the Dangers of Stochastic Parrots", the underrepresentation of minority languages risks further entrenching digital inequities.

For persons with disabilities, the promise of AI is real, but so are the pitfalls. Tools such as AI-generated captions or voice interfaces can empower accessibility, but when designed without deep consultation with disabled communities, they often fail to meet real-world needs. Worse, they can create new barriers if educators and leaders assume that "AI handles accessibility" without ensuring quality or appropriateness.

Socio-economic disparities also loom large. Many AI-powered educational tools assume access to reliable broadband and modern devices. In contexts where connectivity is limited, whether in rural areas or in under-resourced urban communities, these assumptions widen rather than bridge existing educational divides.

This is why ethical leadership in AI is urgently needed, especially in education and community development.
We must move beyond seeing inclusion as a post-hoc "feature" of AI systems. Instead, it must be embedded in every stage:

  • in design,

  • in dataset curation,

  • in model evaluation,

  • in ongoing human oversight.

At Seeds of Knowledge, we believe that preparing learners for the future includes preparing them to ask critical questions about AI:

  • Who designed this system?

  • Whose experiences shaped the data?

  • Who benefits, and who may be marginalised?

  • How can we advocate for technologies that serve the whole of society, not just a privileged few?

As the European Commission’s High-Level Expert Group on AI (2019) emphasised, trustworthy AI must be lawful, ethical, and robust, not just in technical terms, but in its social impacts. We cannot afford to treat inclusion as an afterthought in this new era. AI is shaping the very architecture of opportunity. The time to ensure it serves all, especially those most often left at the margins, is now.
