The Crucial Role of Inclusiveness in AI

AI ETHICS

Principle of AI — Inclusiveness

Image created using ChatGPT

I was fascinated by AI’s potential. It seemed like the future, with endless possibilities to revolutionize healthcare, education, and legal systems.

But one thought kept nagging at me: Who benefits from this technology? It dawned on me that if AI only serves a select group, it could widen existing social inequalities. If AI is only built for those with the most access, are we moving forward?

My work with the Internet of Things (IoT) and smart cities has already shown me how technology, while promising to enhance urban living, often caters to those with the resources to use it.

That same realization hit me with AI: AI must be inclusive.

It has to serve everyone, especially the vulnerable, or we risk creating deeper societal divisions. This is why AI must align with the principles of our Federal Constitution, which emphasizes equality, justice, and fairness for all.

Building Inclusiveness into AI Development

The first step to creating inclusive AI is ensuring the systems are designed for everyone, not just the privileged few.

I remember discussing this with a colleague. I asked, “What happens when AI systems in healthcare only use data from urban hospitals that serve wealthier patients?”

We both knew the answer.

Those systems wouldn’t be effective in rural areas, where diseases manifest differently and healthcare resources are more limited.

This example stayed with me. Imagine an AI designed to detect skin cancer, I thought. If it’s only trained on images of light-skinned individuals, what happens when it’s used on darker-skinned patients?

The answer is obvious: it could misdiagnose or fail to identify the condition entirely. Such bias has serious consequences—it could lead to poorer healthcare outcomes for large sections of the population.

That’s why AI systems need diverse data. We can ensure that AI serves everyone equally by training models on datasets that include various skin tones, environments, and lifestyles.
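One practical way to act on this is to audit a training set before the model ever sees it. The sketch below is a minimal illustration, assuming a hypothetical record schema with a `skin_type` label (the field name and grouping are my own, not from any real dataset); it flags groups whose share of the data falls below a chosen threshold.

```python
from collections import Counter

def audit_representation(records, field, threshold=0.10):
    """Flag groups that fall below a minimum share of the dataset.

    records: list of dicts (hypothetical schema)
    field: the attribute to audit (e.g. a skin-type label)
    threshold: minimum acceptable fraction per group
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    underrepresented = [g for g, s in shares.items() if s < threshold]
    return shares, underrepresented

# Hypothetical training records for a skin-lesion classifier
data = (
    [{"skin_type": "I-II"}] * 70
    + [{"skin_type": "III-IV"}] * 25
    + [{"skin_type": "V-VI"}] * 5
)
shares, flagged = audit_representation(data, "skin_type", threshold=0.10)
print(shares)   # shares per group: 0.70, 0.25, 0.05
print(flagged)  # ['V-VI'] -- darker skin types underrepresented
```

An audit like this is only a first check; passing it doesn't prove fairness, but failing it is a clear signal to collect more data before training.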

I remember thinking, This is more than just good design — it’s about justice. AI has to reflect the diversity of the people it’s meant to serve, or we’re not living up to our national values of fairness and equality.

Addressing the Needs of Vulnerable Groups

Then, there’s the issue of how AI tools can meet the specific needs of vulnerable populations.

AI should serve more than just those who live in developed, well-connected areas or who can afford the latest technology. It must serve everyone, especially those in need.

One day, I was thinking about the legal system and how difficult it is for many people to get proper legal representation.

I thought, “What if an AI could provide essential legal advice to those who can’t afford a lawyer?” This idea felt like a breakthrough. AI could help people understand their legal rights, assist in drafting contracts, or even generate legal documents.

But then another thought came to mind: What about people who struggle with reading? Or those without reliable internet access?

For AI to be inclusive, it must account for these users.

I imagined an AI legal assistant offering voice guidance for people with lower literacy levels or an AI working offline to reach remote areas. It became clear to me that AI could be the key to equal access to justice—but only if it’s designed to include everyone.

This aligns perfectly with our national principles of fairness and equality.

Ensuring Diversity Among AI Developers

As much as inclusiveness is about the technology itself, it’s also about who is building it. A diverse team of developers brings different perspectives, helping identify and address biases early on.

Are the people building this AI as diverse as those it serves?

Education is a perfect example of how a lack of diversity in AI development can lead to unintended consequences.

I once discussed AI-powered systems for grading student essays. I wondered, “What if the AI is biased towards a specific cultural or linguistic group?”

Imagine a system that unintentionally favors students from urban areas who are more familiar with specific cultural references. Students from rural or minority backgrounds could be unfairly marked down simply because the AI doesn’t understand their context.
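One simple way to surface this kind of problem is to compare the model's average scores across student groups. The sketch below uses invented scores and a hypothetical urban/rural grouping of my own; a large gap between group means is not proof of bias by itself, but it is a strong signal that the grading model needs a closer audit.

```python
from statistics import mean

def score_gap_by_group(scores, groups):
    """Compute the mean score per group and the max-min gap.

    scores: AI-assigned essay scores
    groups: parallel list of group labels (hypothetical: 'urban'/'rural')
    """
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    means = {g: mean(v) for g, v in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Invented example scores for illustration only
scores = [85, 90, 88, 72, 70, 74]
groups = ["urban", "urban", "urban", "rural", "rural", "rural"]
means, gap = score_gap_by_group(scores, groups)
print(means)  # urban mean is roughly 87.7, rural mean is 72
print(gap)    # a double-digit gap warrants an audit
```

Checks like this are cheap to run on every model release, which is exactly the kind of safeguard a diverse development team is more likely to think of building in.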

That’s where a diverse team of developers comes in.

They would bring a broader range of experiences and insights, helping to design AI systems that are fairer and more inclusive.

I pictured a scenario where developers from various backgrounds are involved in creating an AI-powered educational tool. A diverse team would recognize that not all students have the same internet access, so they design the system to work offline or in low-bandwidth environments.

That’s how AI can truly level the playing field for students, I thought. It’s about giving every student an equal chance, no matter where they come from.

Moving Forward with Inclusive AI

As I reflect on the future of AI, one thing becomes clear to me: Inclusiveness is not a choice; it’s a necessity.

If we’re not careful, AI could widen the gaps we want it to close.

That’s why we need to ensure that AI development practices are inclusive, that tools are designed to meet the needs of vulnerable groups, and that the teams behind these systems are as diverse as the society they serve.

In my work with IoT and smart cities, I’ve always aimed to make technology accessible to as many people as possible.

The same approach must be taken with AI.

By focusing on inclusiveness, we can ensure AI systems benefit everyone, which aligns with our Federal Constitution and National Principles. This isn’t just about technology; it’s about creating a fairer, more just world.

In the end, I realized that inclusiveness in AI isn’t a luxury—it’s essential.

If we don’t take inclusiveness seriously, we risk creating technology that serves only the privileged and leaves the rest behind.

And that’s not the future I want to build.



Author: Mazlan Abbas

IoT Evangelist
