The Story Behind Favoriot – Part I: The Humble Beginnings of Favoriot

When I look back to 2017, I vividly recall the early days of building the Favoriot IoT Platform. We started small, working in a modest room with only a few dedicated staff members. It was a humble beginning, but our vision was anything but small.

The Idea Behind the Platform

The idea for the platform arose from a simple but pressing need—to support our first IoT product, Raqib. At the time, we realised there was a significant gap in the market for an IoT platform that could cater to businesses and developers in a user-friendly and accessible way. But as the idea began to take shape, doubts crept in.

“Can we really compete with giants like AWS and Azure?” I often asked myself. The thought was daunting. Competing against well-established platforms felt like an insurmountable challenge. We knew that many people didn’t know who we were, and there were plenty who doubted the capabilities of a small team attempting something so ambitious. But despite the naysayers, we pressed on. Deep down, we were confident in our technology’s potential and our ability to deliver something meaningful.

A Bold Move: Offering the Platform for Free

By 2019, we decided to take a bold step—offering the Favoriot IoT Platform to the public for free. “Maybe this will be the best way to attract attention and build an early user base,” I thought. It seemed like a logical approach. But the reality didn’t match our expectations. The response was underwhelming, to say the least. Only a handful of people showed interest, and our efforts didn’t yield the results we hoped for.

Finding a New Strategy: Education

Sitting down with the team, I voiced my concerns. “We need to do more. This isn’t enough. We must find a better way to introduce our platform.” The team brainstormed tirelessly, and that’s when the idea of offering IoT courses emerged. It was a lightbulb moment. We realised that one of the best ways to attract users was through education—teaching people about IoT while simultaneously showcasing the capabilities of our platform.

“But what if no one registers for the courses?” The doubt lingered. Investing time and resources into something that might not succeed was nerve-wracking. Still, we decided to take the plunge. We structured the courses so that participants could learn the fundamentals of IoT and get hands-on experience with the Favoriot IoT Platform.

Success Through IoT Education

Alhamdulillah, the effort paid off. The response to the courses was beyond encouraging. Participants appreciated the knowledge they gained and began to explore our platform in growing numbers. It was a turning point for us. From those early courses, word began to spread, and the Favoriot IoT Platform started gaining traction. The numbers grew steadily; today, I’m proud to say that we have over 9,343 users from 111 countries. Seeing the global reach of something we built from scratch fills me with immense pride.

“I can’t believe we’ve come this far,” I shared with the team during our discussions. “But this is just the beginning. We still have so much more to achieve.”

Expanding to New Horizons

With the foundation now solid, we set our sights on the future. Our next goal is to expand our presence to neighbouring countries. The team and I are confident that the Favoriot IoT Platform can achieve even greater success beyond Malaysia. This journey has taught us that with effort and dedication, even the boldest dreams are within reach.

“Ready for the next phase?” I asked the team one day, knowing full well that the challenges ahead would be just as demanding as the ones we’ve overcome. Their answer was clear and resolute. We are ready to take on the IoT world.

Reflecting on the Journey

Looking back, I see how every step we took was filled with challenges and uncertainties, but it was also marked by resilience and an unrelenting drive to succeed. No matter how small, each decision was crucial in shaping where we are today. The journey has been extraordinary, from a tiny room with a handful of staff to a global platform with thousands of users.

A Glimpse of What’s Next

This is only part of the story behind the development of the Favoriot IoT Platform. There’s much more to share—the challenges we faced, the lessons we learned, and the milestones we celebrated. Stay tuned for Part II, where I’ll delve deeper into the obstacles we’ve encountered and how we’ve navigated them to reach where we are now.

This journey is a testament to what can be achieved with a clear vision, unwavering determination, and a great team by your side. I hope our story inspires others to pursue their big or small dreams. After all, every outstanding achievement starts with a single step and the willingness to take it.


AI Agents: The Game Changer for 2025 and Beyond

AI JOURNEY

IoT and AI Agents: A Perfect Team

As I get ready to start my day, I think about something I often ask myself: “What exactly is an AI agent?” It’s not just another complicated tech term; it’s an idea changing how we use technology and go about our daily lives.

An AI agent is like a super-smart assistant that can understand its surroundings, make decisions, and take actions to meet specific goals— all without needing constant instructions.

But what does “understand its surroundings” actually mean?

Picture a world where IoT sensors are like the eyes and ears of an AI agent. These sensors are embedded everywhere: in your home, your car, factories, farms, and even in your city.

They collect real-time data about everything — temperature, motion, air quality, energy use, etc.

An AI agent processes this data to get a clear picture of what’s happening and decides what to do next.

For example, a smart thermostat uses temperature sensors to learn when to heat or cool your home based on your habits.

On a larger scale, in a smart city, IoT sensors on traffic lights help an AI agent manage traffic flow, reducing congestion during rush hour.
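To make that sense-decide-act loop concrete, here is a minimal Python sketch of a rule-based thermostat “agent”. It is an illustration only: the readings, setpoint, and deadband are invented, and a real product would learn the setpoint from your habits rather than hard-code it.

```python
# Toy sketch of the sense -> decide -> act cycle described above.
# All numbers here are invented for illustration.

def decide_hvac_action(temperature_c, preferred_c=22.0, deadband=1.0):
    """The 'decide' step: map one sensor reading to an action."""
    if temperature_c > preferred_c + deadband:
        return "cool"
    if temperature_c < preferred_c - deadband:
        return "heat"
    return "idle"

# The 'sense' step, simulated as a short stream of readings.
readings = [19.5, 21.8, 24.1]

for t in readings:
    action = decide_hvac_action(t)      # decide
    print(f"{t:.1f} C -> {action}")     # 'act' (here we just report)
```

A real smart thermostat would replace the hard-coded `preferred_c` with a value learned from your behaviour over time, which is the “learning your habits” part of the example above.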

Why AI Agents Matter

Picture a day when your assistant reminds you about meetings, plans your route to avoid traffic, and even orders your favourite snacks when you’re running low.

It’s not just about making life easier; it’s about being smarter with your time and energy.

Here are some reasons why AI agents are so helpful:

  1. Getting Things Done Faster: AI agents handle repetitive tasks like summarizing emails or making reports, leaving you free to focus on bigger ideas. Imagine never having to waste time on tedious chores again.
  2. Making Life Personal: AI agents learn about you — what you like and need — and give you tailored suggestions. Whether finding a new show to watch or giving health advice, they make things feel just right for you.
  3. Handling Big Loads: In businesses, AI agents can manage a ton of work without slowing down. From answering customer questions to keeping track of supplies, they handle it all smoothly.
  4. Getting Smarter Over Time: Unlike regular systems that don’t change, AI agents learn from every interaction and improve continuously. They adapt to new challenges and keep getting better at helping you.

As I think about these benefits, I wonder: “Why is 2025 the year for AI agents?” The answer lies in advancing technology, business readiness, and growing public trust in AI.

Why 2025 Will Be the Year for AI Agents

There are several reasons why 2025 is shaping up to be a big year for AI agents:

  1. More Companies Using Them: Businesses are realising how helpful AI agents can be. From helping customers to improving healthcare, these agents are becoming a must-have tool.
  2. Better Technology: AI agents are more intelligent than ever, thanks to language understanding and decision-making improvements. They can now handle complex tasks and make real-time choices.
  3. People Trust Them More: More and more people are comfortable using AI tools. For example, in 2024, the number of people using chatbots during the holiday shopping season skyrocketed — a clear sign that consumers are embracing AI.
  4. Industry Innovation: New AI tools designed for specific tasks make it easier for businesses and individuals to adopt these technologies. For example, personal virtual assistants are now more capable and more accessible than ever.

“It’s not just about cool gadgets,” I remind myself. “It’s about creating a system that works for everyone.” That’s why things like data security and ethical AI are so important.

IoT and AI Agents: A Perfect Team

As someone who works closely with IoT (Internet of Things), I often think about how it fits with AI agents. IoT connects devices — like your smartwatch, fridge, or car — creating a network that collects loads of data.

But data alone isn’t valuable. AI agents are like the brain that makes sense of everything and acts on it.

Here’s how they work together:

  1. Making Data Useful: IoT devices collect information, but AI agents turn it into insights you can use. For example, a smart home can learn your daily habits and automatically adjust the lights or temperature to save energy.
  2. Deciding on Their Own: In agriculture, IoT sensors check the soil, and AI agents choose when and how much to water crops. In healthcare, wearable devices monitor your health, and AI agents alert your doctor if something looks off.
  3. Simple and Easy to Use: Instead of confusing dashboards, you can just ask your AI agent questions like, “How much energy did we save this month?” and get clear answers.
  4. Managing Big Systems: Managing IoT networks can be a headache as they grow. AI agents make it simple by adapting to new devices and managing everything efficiently. In smart cities, they can control traffic lights to ease congestion.
  5. Improving Shopping Experiences: IoT sensors track what’s on the shelves in stores, and AI agents reorder items before they run out. It’s all about making sure customers get what they need without waiting.
  6. Helping the Environment: AI agents and IoT can reduce waste and save energy. Smart grids distribute electricity more efficiently, and AI agents optimize energy use at home and in factories.
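Point 2 in the list above (deciding on their own) can be sketched in a few lines. This is a hypothetical example, not a real system: the thresholds and the watering formula are invented purely to show how sensor inputs turn into an autonomous decision.

```python
# Toy irrigation decision in the spirit of point 2 above.
# Thresholds and sensor values are invented for illustration.

def irrigation_plan(soil_moisture_pct, rain_forecast_mm):
    """Decide whether and how much to water from two sensor inputs."""
    if rain_forecast_mm >= 5:        # rain expected soon: skip watering
        return 0.0
    if soil_moisture_pct >= 40:      # soil already moist enough
        return 0.0
    # Water more the drier the soil is (litres per square metre).
    return round((40 - soil_moisture_pct) * 0.25, 2)

print(irrigation_plan(soil_moisture_pct=22, rain_forecast_mm=0))   # dry field, no rain
print(irrigation_plan(soil_moisture_pct=45, rain_forecast_mm=0))   # moist enough, no action
```

The value of the pairing is visible even in this toy: the IoT sensors supply the numbers, and the agent supplies the judgement about what to do with them.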

“Imagine this,” I think. “Your car tells your home’s AI agent you’re on your way. The garage door opens, the lights turn on, and the air conditioning adjusts to your preferred temperature. That’s the future — where everything works together seamlessly.”

The Road Ahead

Of course, there are challenges. People worry about privacy and security. “How do we ensure AI agents work for us and not against us?” That’s a question I often come back to.

But the potential is enormous.

AI agents combined with IoT can change industries, simplify life, and solve problems in ways we’ve never seen before.

From improving healthcare to creating smarter cities, the possibilities are endless.

As I wrap up my thoughts, I feel excited about what’s to come. The year 2025 will be a turning point for AI agents and how we live and work.

It’s a year that could redefine everything.

So, let’s get ready to embrace this change. Let’s use AI agents to create a smarter, more connected world.

The future isn’t something we wait for — it’s something we build together.

The Future of Writing: Can AGI Rival Today’s Writers?

AI REVOLUTION

What if AGI takes over the role of a writer?

Image created by ChatGPT based on this story.

“What if AGI becomes as good as me at writing?” I muttered, half amused and half concerned. The thought lingered, almost taunting me.

Could a machine, no matter how intelligent, truly replicate the art of storytelling?

Writing has always been a deeply personal journey for me.

It’s not just about the words; it’s about weaving experiences, emotions, and lessons into a narrative that connects with readers.

“But what if AGI learns to do that too?” I challenged myself. The question refused to go away.

The Essence of Writing

I often remind myself why I write.

It’s about more than sharing knowledge; it’s about creating connections.

I poured my soul into every word when I wrote about my journey of building FAVORIOT.

Those articles weren’t just facts — they were my triumphs, frustrations, and dreams in black and white.

“Could AGI ever capture that?” I asked myself aloud.

It could replicate the structure, even the tone, but would it feel the pride I felt when describing our successes?

Would it understand the weight of the sleepless nights behind those stories?

Writing is as much about the process as it is about the product. “Machines don’t have sleepless nights,” I chuckled, though the thought didn’t comfort me.

What Would AGI Need to Learn?

I thought as I sipped my coffee that if AGI wanted to rival human writers, it would have to overcome three major challenges.

“First,” I said, leaning back in my chair, “context and nuance.” Writing isn’t just about stringing words together; it’s about understanding the world in all its complexity.

When I write about IoT, I’m not just describing technology — I’m addressing real-world problems, cultural challenges, and user needs. “Can AGI grasp that kind of complexity?” I wondered.

“Second, empathy,” I continued, almost as if I were explaining it to an invisible audience. Readers connect with stories because they resonate emotionally.

When I write about entrepreneurship, I think about the struggles of young dreamers reading my words.

Would AGI know how to address their hopes and fears, or would it just give generic advice?

“And third,” I paused for effect, “failure.” Every writer knows the pain of scrapping drafts, rewriting paragraphs, and starting over.

Those failures teach us what works and what doesn’t. “Can a machine learn the value of failure? Can it self-critique like I do?” I mused.

Where AGI Might Excel

I reminded myself that it’s not all doom and gloom. AGI could bring remarkable strengths to the table.

I imagined it working tirelessly, synthesising vast amounts of information in seconds, and crafting perfectly structured articles. “It would be like having a research assistant who never sleeps,” I thought, smiling at the idea.

AGI could adapt its style to suit any audience.

It could shift gears effortlessly, whether writing for IoT experts, poetry lovers, or aspiring entrepreneurs. “Imagine the possibilities,” I said, almost excited now.

But the excitement was tempered by a nagging thought: “Would it feel like cheating to rely on AGI for something so personal?”

Human and Machine: A Collaborative Future

“Maybe we don’t have to compete,” I said, voicing the thought that had been brewing. “Maybe AGI can be a collaborator, not a rival.”

I imagined using AGI to handle the technical aspects of my articles, freeing me to focus on storytelling and emotional resonance.

“It could help me write faster, but the heart of the article would still be mine,” I reasoned.

I’d already seen glimpses of this collaboration.

Tools like Grammarly refine my writing, while AI-driven platforms assist with research and brainstorming. “It’s not replacing me; it’s enhancing me,” I concluded, feeling more optimistic.

The Ethical Dilemma

“But then,” I hesitated, “what happens when AGI starts writing independently?”

If it writes an article that’s indistinguishable from mine, who owns the content? And how do we ensure transparency? “Would readers still value the writer or only care about the content?” I asked, troubled by the implications.

Writing, at its core, is a personal act.

It’s an extension of one’s thoughts, experiences, and beliefs.

If AGI mimics that perfectly, does it diminish the value of human expression? “Or,” I wondered, “does it make human stories even more precious?”

A Hopeful Outlook

I glanced at the clock.

I’d spent over an hour lost in this internal debate, yet I felt no closer to an answer. “Maybe it’s not about finding answers,” I admitted. “Maybe it’s about asking the right questions.”

As I typed these final words, I reminded myself why I write.

It’s not for perfection or applause. It’s for connection.

It’s for the moments when a reader says, “This resonates with me.”

AGI might one day master writing mechanics, but it will never have my journey—my struggles, triumphs, and voice.

“And that,” I said to myself, a small smile creeping across my face, “is what makes every story I write, including this one, uniquely mine.”

Living with AGI in 2030: How Everything Changed

ARTIFICIAL GENERAL INTELLIGENCE (AGI) REVOLUTION

An imaginary future

Image created by ChatGPT

I never thought I’d see the day. Artificial General Intelligence — or AGI as everyone calls it — is now part of everyday life. Back in the 2020s, it felt like something out of a movie.

You’d hear tech people throw around terms like “superintelligence,” and I’d nod along, not understanding.

But now? I’m living it.

And let me tell you, it’s not what I expected — it’s better.

“AGI? What’s That?”

I still remember the first time I heard about AGI hitting the scene.

It was all over the news: “AGI has arrived!” My first thought? Here we go, another overhyped tech buzzword.

I figured it was just another fancy update to those voice assistants that could barely understand me half the time.

But then, over the next few weeks, things started to change—real change.

Hospitals began announcing breakthroughs, governments were talking about smarter cities, and my neighbours were raving about how AGI was making their lives easier.

Mornings Made Simple

Fast forward to today, and AGI is part of my daily routine.

Every morning, my assistant, let’s call it “Genie,” greets me like a friend who knows me a little too well.

“Good morning, Mazlan! You didn’t sleep well last night — should I push your 9 a.m. meeting to the afternoon?”

I blink at my screen, barely awake. How does it know? Then I glance at my smartwatch, which has been tracking my sleep patterns.

Of course, Genie knows. It’s connected to everything — my watch, calendar, even the temperature of my bedroom.

“Yes, please,” I mumble, still groggy.

Genie’s not just a glorified organiser. It gets me.

If I’m feeling stressed, it suggests a quick meditation.

If I’m on a productivity streak, it lines up tasks so I can breeze through them.

It’s like having a personal assistant, life coach, and best friend rolled into one.

“Wait, My Health Is in Check?”

The biggest game-changer for me has been healthcare.

I’ve always been terrible about going to the doctor.

Who has the time? But now, I don’t need to. Genie monitors everything — heart rate, blood pressure, you name it.

Last year, it flagged something unusual with my heartbeat. “It’s probably nothing,” I thought, but Genie insisted I schedule a virtual check-up.

It turns out it wasn’t nothing. The doctor said it could’ve turned into something serious if we hadn’t caught it early.

It’s weird, isn’t it? A piece of tech cared about my health more than I did.

And now, I don’t take it for granted. Knowing Genie’s got my back — even for things I can’t see — makes me feel safer.

My Kids Are Thriving

The way my kids learn now blows my mind. Back in school, it was all about memorising facts and fitting them into one-size-fits-all lessons.

But for my kids? AGI creates lessons tailored to them.

My youngest is obsessed with space. She’s learning everything from the physics of black holes to the history of space exploration—all in exciting ways.

The other day, she asked me, “Dad, did you know a black hole can ‘spaghettify’ a star?”

“Uh, sure,” I said, pretending to know what she was talking about. But inside, I was amazed. She’s learning things I didn’t even know existed at her age and loving it.

Cities That Work for Us

Even the city feels different now.

Remember those awful traffic jams? Gone. AGI manages the flow of self-driving cars so perfectly that I haven’t been stuck in traffic in years.

Buses, trains, even bikes — all move like clockwork.

And energy? Thanks to AGI’s smart grids, my house runs entirely on renewable power. I don’t even think about electricity bills anymore.

Genie ensures everything is efficient.

One day, I asked, “Why haven’t we had a blackout in years?”

Genie replied, “Because every kilowatt of energy is optimised, Mazlan.”

I didn’t fully understand the science, but I got the point: AGI handles things so well that I don’t have to worry about them.

Rediscovering What Matters

Here’s the surprising part: with AGI taking care of so much, I’ve rediscovered things I’d forgotten about.

Take playing my guitar: I used to love strumming Bee Gees songs, but life got in the way. Now, I have time to pick it up again.

And it’s not just me.

My neighbour, a retired engineer, has started painting landscapes. Another friend is finally writing the book he’s been talking about for years. It’s like we’ve all been given permission to dream again.

Not Everything’s Perfect

Of course, not everything about AGI is sunshine and rainbows.

Some people are still trying to figure out who controls it. “What if it gets misused?” they ask. It’s a valid question.

I’ve even joined a few local forums to discuss how AGI should be managed.

“Do you think AGI could ever take over?” I asked a tech-savvy friend recently.

“Only if we let it,” he replied. “That’s why we need to stay involved.”

It’s reassuring to know that while AGI is brilliant, the big decisions still rest with us.

Looking Ahead

As I sit here, writing this on my porch, I can’t help but feel grateful. AGI hasn’t just made life easier — it’s reminded us what it means to be human.

We’re no longer drowning in mundane tasks or endless stress. We have real time to connect, create, and enjoy life.

Life in 2030 isn’t perfect, but it feels like a step in the right direction.

For the first time in a long time, I’m excited about what’s next. And that is the greatest gift AGI has given us: hope for the future.

The Crucial Role of Inclusiveness in AI

AI ETHICS

Principle of AI — Inclusiveness

Image created using ChatGPT

I was fascinated by AI’s potential. It seemed like the future, with endless possibilities to revolutionize healthcare, education, and legal systems.

But one thought kept nagging at me: Who benefits from this technology? It dawned on me that if AI only serves a select group, it could widen existing social inequalities. If AI is only built for those with the most access, are we moving forward?

My work with the Internet of Things (IoT) and smart cities has already shown me how technology, while promising to enhance urban living, often caters to those with the resources to use it.

That same realization hit me with AI: AI must be inclusive.

It has to serve everyone, especially the vulnerable, or we risk creating deeper societal divisions. This is why AI must align with the principles of our Federal Constitution, which emphasizes equality, justice, and fairness for all.

Building Inclusiveness into AI Development

The first step to creating inclusive AI is ensuring the systems are designed for everyone, not just the privileged few.

I remember discussing this with a colleague. I asked, “What happens when AI systems in healthcare only use data from urban hospitals that serve wealthier patients?”

We both knew the answer.

Those systems wouldn’t be effective in rural areas, where diseases manifest differently and healthcare resources are more limited.

This example stayed with me. Imagine an AI designed to detect skin cancer, I thought. If it’s only trained on images of light-skinned individuals, what happens when it’s used on darker-skinned patients?

The answer is obvious: it could misdiagnose or fail to identify the condition entirely. Such bias has serious consequences—it could lead to poorer healthcare outcomes for large sections of the population.

That’s why AI systems need diverse data. We can ensure that AI serves everyone equally by training models on datasets that include various skin tones, environments, and lifestyles.

I remember thinking, This is more than just good design — it’s about justice. AI has to reflect the diversity of the people it’s meant to serve, or we’re not living up to our national values of fairness and equality.

Addressing the Needs of Vulnerable Groups

Then, there’s the issue of how AI tools can meet the specific needs of vulnerable populations.

AI shouldn’t be only for those who live in developed, well-connected areas or who can afford the latest technology. It must serve everyone, especially those in need.

One day, I was thinking about the legal system and how difficult it is for many people to get proper legal representation.

I thought, “What if an AI could provide essential legal advice to those who can’t afford a lawyer?” This idea felt like a breakthrough. AI could help people understand their legal rights, assist in drafting contracts, or even generate legal documents.

But then another thought came to mind: What about people who struggle with reading? Or those without reliable internet access?

For AI to be inclusive, it must account for these users.

I imagined an AI legal assistant offering voice guidance for people with lower literacy levels or an AI working offline to reach remote areas. It became clear to me that AI could be the key to equal access to justice—but only if it’s designed to include everyone.

This aligns perfectly with our national principles of fairness and equality.

Ensuring Diversity Among AI Developers

As much as inclusiveness is about the technology itself, it’s also about who is building it. A diverse team of developers brings different perspectives, helping identify and address biases early on.

Are the people building this AI as diverse as those it serves?

Education is a perfect example of how a lack of diversity in AI development can lead to unintended consequences.

I once discussed AI-powered systems for grading student essays. I wondered, “What if the AI is biased towards a specific cultural or linguistic group?”

Imagine a system that unintentionally favors students from urban areas who are more familiar with specific cultural references. Students from rural or minority backgrounds could be unfairly marked down simply because the AI doesn’t understand their context.

That’s where a diverse team of developers comes in.

They would bring a broader range of experiences and insights, helping to design AI systems that are fairer and more inclusive.

I pictured a scenario where developers from various backgrounds are involved in creating an AI-powered educational tool. A diverse team would recognize that not all students have the same internet access, so they design the system to work offline or in low-bandwidth environments.

That’s how AI can truly level the playing field for students, I thought. It’s about giving every student an equal chance, no matter where they come from.

Moving Forward with Inclusive AI

As I reflect on the future of AI, one thing becomes clear to me: Inclusiveness is not a choice; it’s a necessity.

If we’re not careful, AI could widen the gaps we want it to close.

That’s why we need to ensure that AI development practices are inclusive, that tools are designed to meet the needs of vulnerable groups, and that the teams behind these systems are as diverse as the society they serve.

In my work with IoT and smart cities, I’ve always aimed to make technology accessible to as many people as possible.

The same approach must be taken with AI.

By focusing on inclusiveness, we can ensure AI systems benefit everyone, which aligns with our Federal Constitution and National Principles. This isn’t just about technology; it’s about creating a fairer, more just world.

In the end, I realized that inclusiveness in AI isn’t a luxury—it’s essential.

If we don’t take inclusiveness seriously, we risk creating technology that serves only the privileged and leaves the rest behind.

And that’s not the future I want to build.

The Importance of Transparency in AI

AI ETHICS

Building Trust in AI Software

Image created using ChatGPT

I was fascinated by AI’s power to automate complex tasks, solve problems, and even make decisions that typically require human judgment.

But as I dug deeper into the mechanics of AI, one question kept coming to my mind: How do we ensure that AI is doing what it’s supposed to do?

More importantly, how do we ensure everyone affected by AI decisions understands what’s happening behind the scenes? That’s where the principle of transparency comes into play.

Transparency in AI isn’t just about ensuring the technical aspects are visible to a select group of developers or engineers. It’s about ensuring that the processes and decisions made by AI systems can be explained to all stakeholders — whether they’re technical experts, end users, or decision-makers.

AI must not be a “black box” where decisions are made, but no one understands how or why.

This idea of transparency is essential when AI makes decisions that impact people’s lives. Whether deciding who gets a loan, determining the outcome of a legal case, or even influencing hiring decisions, transparency allows stakeholders to evaluate the risks and address any issues.

Full Disclosure: When AI is Making Decisions

One key aspect of transparency is being upfront when AI is involved in decision-making.

Let’s consider a scenario in the hiring process.

Imagine applying for a job, going through an interview, and later finding out that the final decision on whether you were hired was made by an AI system instead of a human.

I often think about this: Wouldn’t it be frustrating if you didn’t know an AI was involved? That’s why it’s so crucial for companies and organizations to disclose when AI systems are being used in decision-making processes.

People have a right to know if an algorithm is influencing the decisions that affect their lives.

It’s not just a matter of ethics — it’s about trust.

Let’s say a company uses an AI system to screen job applicants. Full disclosure would mean informing applicants upfront that an AI tool is part of the selection process, explaining how it works, and outlining what data it considers.

Without this transparency, candidates may lose confidence in the outcome, especially if they’re rejected without explanation.

Transparency gives people the opportunity to understand and even challenge decisions if needed.

The Purpose Behind the AI System

Another critical element of transparency is ensuring the AI system’s purpose is clear.

Take, for example, a facial recognition system used in security.

How many people understand the full extent of facial recognition’s purpose? Is it merely for security, or is it also used to track individuals for marketing purposes?

Stakeholders should always be aware of the purpose of the AI systems they interact with. For example, suppose a facial recognition system is used at an airport for security purposes. In that case, passengers must know precisely what the system is doing, what kind of data is being collected, and how it’s being used.

Without this clarity, there’s a risk of misuse or mistrust.

One real-world example is when social media platforms use AI to filter content.

If users are unaware that AI systems are screening and categorizing their posts, they might not understand why specific posts are taken down or flagged. This lack of transparency can create confusion, making people feel their rights are being violated.

Understanding the Data: Bias and Quality Control

Whenever I think about AI transparency, the issue of training data comes to mind.

AI systems are only as good as the data they’re trained on, but often, the data contains biases that reflect historical or social inequalities. The data used to train AI must be disclosed and scrutinized to ensure fairness.

Take the example of AI systems used in the legal system.

Imagine an AI tool designed to predict the likelihood of someone reoffending after being released from prison. If the data used to train the AI is biased — perhaps it overrepresents specific communities — it could lead to unfair outcomes.

What if the AI system was unknowingly biased against a specific demographic? These biases could go unchecked without transparency about the training data, perpetuating discrimination.

In my view, transparency in AI isn’t just about disclosing that AI is being used — it’s also about being open about the data and processes behind it. Stakeholders need to know what historical and social biases might exist in the data, what procedures were used to ensure data quality, and how the AI system was maintained and assessed.
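One simple starting point for that scrutiny is to measure how each community is represented in the training data. The sketch below is a toy illustration in plain Python: the `community` field and the thresholds for flagging over- or under-representation are my own assumptions, not any standard methodology.

```python
from collections import Counter

def representation_report(records, group_key):
    """Report each group's share of the training data and flag imbalance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {g: n / total for g, n in counts.items()}
    # Flag any group whose share is far from an even split across groups
    # (illustrative thresholds: more than 2x or less than half the even share).
    even = 1 / len(counts)
    flagged = [g for g, share in report.items()
               if share > 2 * even or share < 0.5 * even]
    return report, flagged

# Toy dataset: community A is heavily over-represented, C under-represented.
training_data = (
    [{"community": "A"}] * 70
    + [{"community": "B"}] * 20
    + [{"community": "C"}] * 10
)
report, flagged = representation_report(training_data, "community")
print(report)   # share of the data per community
print(flagged)  # communities far from an even split
```

A report like this doesn’t fix bias by itself, but publishing it is exactly the kind of disclosure that lets stakeholders ask the right questions.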

Maintaining and Assessing AI Systems

An often overlooked but equally important aspect of AI transparency is how these systems are maintained over time.

Just because an AI model works well today doesn’t mean it will work as expected tomorrow. What if the data changes or the system starts to degrade over time?

I always think of this in the context of healthcare. Imagine an AI system used to assist doctors in diagnosing patients. The system was trained on medical data several years ago, but medical knowledge and treatments have evolved rapidly. Without regular updates and assessments, the AI could become outdated, leading to inaccurate diagnoses.

Transparency means informing users about how the AI system works now and keeping them updated on how it’s maintained and monitored over time. This ensures that AI systems remain effective and fair.
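A minimal sketch of such ongoing monitoring, assuming we only track shifts in feature averages (real drift detection uses much richer statistics), might look like this. The feature name and numbers are invented for illustration.

```python
def mean(xs):
    return sum(xs) / len(xs)

def drift_alert(baseline, current, threshold=0.2):
    """Alert when a feature's average shifts by more than `threshold`
    (relative to the baseline average) since the model was trained."""
    alerts = {}
    for feature, base_values in baseline.items():
        base_mean = mean(base_values)
        cur_mean = mean(current[feature])
        shift = abs(cur_mean - base_mean) / abs(base_mean)
        if shift > threshold:
            alerts[feature] = round(shift, 2)
    return alerts

# Toy example: a lab-test value drifts upward after the model was trained.
baseline = {"blood_marker": [1.0, 1.1, 0.9, 1.0]}
current = {"blood_marker": [1.4, 1.5, 1.3, 1.6]}
print(drift_alert(baseline, current))
```

The design choice worth noting: the check runs continuously against live data, so an alert is raised before the outdated model quietly degrades in production.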

The Right to Challenge AI Decisions

Finally, AI transparency must include people’s ability to challenge decisions made by AI systems.

This is crucial for building trust.

If someone feels an AI system has unfairly treated them—say, by denying them a loan or incorrectly flagging them in a security check—they should have the right to question and appeal the decision.

I often ask myself, “How would I feel if an AI made a decision about me, and I had no way to contest it?” This is where transparency plays a pivotal role.

It’s not enough for people to know an AI system made a decision — they also need to know how to challenge that decision.

Transparency ensures that AI systems are accountable through a human review process or by providing clear channels for appeals.

Moving Forward with Transparent AI

It’s clear that transparency is not a luxury—it’s a necessity.

Without it, AI systems risk becoming tools people don’t trust or understand. To succeed, AI must be transparent in its processes, decisions, and data usage.

Transparency principles — whether they involve disclosing AI’s role in decision-making, clarifying its intended purpose, or allowing for challenges — are essential to building trust in AI systems.

This is the only way to ensure AI systems benefit everyone fairly and responsibly.

Why AI Must Be Fail-Safe: Ensuring Reliability and Human Oversight

PRINCIPLES OF AI

Building Trust in AI: The Power of Reliability, Safety, and Control


The Importance of Reliability in AI

As someone who has worked extensively with technology, I’ve always emphasized the importance of reliability in AI systems. Reliability isn’t just a buzzword; it means that AI works as expected under normal and challenging circumstances.

Take the example of autonomous vehicles.

Imagine a self-driving car cruising down the highway on a sunny day — everything seems fine. But what happens when the weather suddenly changes? What if it starts raining heavily or if fog sets in? The car’s AI must remain reliable in identifying obstacles, following traffic rules, and ensuring passenger safety. If the system fails under these conditions, it’s not ready for real-world use.

“Would I trust this system if my safety depended on it?” Developers need to ask themselves this question. Reliability doesn’t mean perfection; it means the system does what it was designed to do under most circumstances.

When it encounters unexpected situations, it must still respond appropriately.

Safety in AI: More Than Just a Feature

Safety is crucial to AI, especially when human lives are at stake.

One simple yet powerful example of AI contributing to safety is found in modern vehicles — many now come equipped with AI features like automatic emergency braking.

Imagine you’re driving, and suddenly, the car in front of you stops abruptly. You might not have time to react, but the car’s AI does. It slams on the brakes to avoid a collision.

This shows how AI can enhance safety by making quick, life-saving decisions. However, this only works if the AI system has been thoroughly tested and proven to act reliably in such scenarios.

Fail-safe mechanisms are essential. If an AI system encounters an error or an unexpected situation, it must default to a state that avoids harm. A failure in high-risk environments like healthcare or transportation could lead to catastrophic outcomes. Fail-safe design ensures the system handles the situation without causing damage, even in the worst-case scenario.
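The fail-safe idea can be sketched in a few lines: any error or implausible input makes the controller fall back to a harm-avoiding default instead of continuing with a bad plan. The braking logic and numbers below are invented for illustration and are not taken from any real vehicle system.

```python
class SafetyStop(Exception):
    """Raised when the controller cannot act safely on its inputs."""

def plan_braking(distance_m):
    """Return a braking force in [0, 1] from a distance reading (metres).
    Toy formula: brake harder the closer the obstacle is."""
    if distance_m is None or distance_m < 0:
        raise SafetyStop("implausible sensor reading")
    if distance_m == 0:
        return 1.0
    return max(0.0, min(1.0, 10.0 / distance_m))

def control_step(distance_m):
    """Fail-safe wrapper: on any safety error, default to the
    harm-avoiding state (maximum braking) rather than guessing."""
    try:
        return plan_braking(distance_m)
    except SafetyStop:
        return 1.0  # safe default: full braking

print(control_step(50.0))   # normal operation: light braking
print(control_step(None))   # sensor fault: falls back to full braking
```

The key design choice is that the *default* on failure is the safe state; the system never needs to work correctly in order to avoid harm.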

I remember a colleague working on a project with industrial robots where safety was a huge concern. The question constantly on everyone’s mind was: “What happens if the robot misinterprets its task and causes an accident?”

The solution was to incorporate multiple layers of safety, including emergency stops and manual overrides. These features gave workers the confidence to operate near the robots, knowing they could intervene if necessary.

Controllability: Ensuring Human Oversight

Humans must maintain ultimate control over AI systems in high-risk areas like military applications or autonomous vehicles. While AI can make quick decisions, humans must be able to override the system if something goes wrong.

For example, AI might control drones or weapon systems in military applications. While these systems can make quick, efficient decisions, human judgment and oversight are still crucial.

I often remind myself, “Autonomy doesn’t mean lack of oversight.” AI should be autonomous but never beyond human control.

Maintaining control is not just about trusting the AI; it’s about ensuring that humans can manage and control these systems effectively. AI should work hand-in-hand with human operators, particularly in scenarios where lives are at stake.

The Role of Testing and Certification

Rigorous testing is one of the most critical steps to ensuring reliability, safety, and control in AI. This isn’t a one-time process; it must be ongoing. The real world constantly changes, and AI systems must adapt to new conditions and scenarios.

Developers and end-users should conduct regular certification and risk assessments. These assessments help identify potential weaknesses or vulnerabilities in the system, ensuring that AI meets the necessary reliability, safety, and control standards.

Without these steps, the systems we build won’t inspire trust; without trust, they can never reach their full potential.

Conclusion: Trust Through Testing

The future of AI depends on our ability to trust these systems.

Trust can only be built through robust testing, thoughtful design, and maintaining human control. As I often remind myself, “An AI system that cannot be trusted will never be used to its full potential.”

Trust comes from knowing these systems are reliable, safe, and controllable, even in critical situations.

Adhering to these core principles is essential for AI to thrive in healthcare, autonomous vehicles, or military applications.

Developers must prioritize testing, and users must be confident that they control these systems. Only then will AI be ready for widespread adoption in our everyday lives.

Protecting Your Future: Why AI Security and Privacy Matter

ABOUT ARTIFICIAL INTELLIGENCE (AI)

Security and Privacy — Principle of AI

When we talk about artificial intelligence (AI), one of the most important things to remember is that AI must be private and secure. It’s like driving a car.

You want the car to function properly, keep you safe, and always be in your control.

AI is no different.

These systems must perform as intended and resist tampering, especially by unauthorized parties.

In my experience working with IoT and smart cities, I have seen the risks and benefits of AI, and developers need to ensure that safety and security are built into every system from the beginning.

Let me explain with some simple examples.

Example 1: Self-driving Cars

One of the most exciting advancements in AI is the development of self-driving cars. Imagine a vehicle designed to drive itself from point A to point B.

The promise of these cars is enticing: fewer accidents, no need for human intervention, and efficient traffic management.

But what happens if the AI controlling the car is hacked? What if an unauthorized party can take control and steer the vehicle into danger?

This is where security and reliability come into play.

The AI system must be designed to resist such interference. Developers must ensure that only authorized individuals can interact with the AI’s decision-making process.

If someone tries to hack into the system, the AI must be able to detect and prevent the intrusion. Without this security, the risk of accidents increases dramatically, and people may lose trust in AI technology.

In my experience with IoT and smart city solutions, we must design systems with these safeguards from the ground up.

AI systems should be tested rigorously under various scenarios to ensure they perform as intended, even in unexpected conditions.

For instance, just as we ensure an IoT device in a smart city responds safely during a power outage, a self-driving car should still behave responsibly if something goes wrong.

Example 2: AI-powered Healthcare Diagnostics

Another powerful application of AI is in healthcare.

AI systems are now being used to assist doctors in diagnosing diseases based on medical images or patient data. Consider how an AI system can analyze thousands of medical scans in seconds to identify potential problems like tumors or heart conditions.

But what if the AI system gives a wrong diagnosis? Or what if someone manipulates the data to favor certain patients while discriminating against others?

Here’s where privacy and data protection become crucial.

Developers must obtain consent before using someone’s personal health data to develop or run an AI system. Patients must know how their data is being used and should have the right to control it.

Data collected for these purposes should never be used to discriminate against patients based on race, gender, or other factors.

Incorporating security-by-design and privacy-by-design principles ensures that data is protected from misuse throughout the AI system’s entire lifecycle.

Developers should also adhere to international data protection standards so patients can trust that their health data is safe and won’t be used unlawfully. As someone who has worked with data from IoT systems, I know how easily personal data can be misused if not handled carefully.

Example 3: AI in Smart Home Devices

Now, let’s look at something more straightforward: smart home devices. Many people use AI-powered gadgets in their homes, like smart thermostats, voice-activated assistants, or security cameras.

These devices collect a lot of personal data.

Imagine if someone could access your security camera without your permission or your voice assistant recorded your conversations and shared them with companies you don’t know about.

Developers of these AI systems must obtain user consent before collecting and using this data. And once the data is collected, it must be protected.

The system should guarantee privacy, meaning the information stays confidential and cannot be accessed by unauthorized parties.

Moreover, the system must be transparent about how the data is used so that users can make informed decisions.

I often tell people that IoT and AI systems are like locks on a door. You wouldn’t leave your front door unlocked for anyone to walk in, right? In the same way, AI systems must lock down data and make sure only the right people have access.

A secure and privacy-conscious design helps build trust with users, which is essential for the widespread adoption of AI technologies.
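As a sketch of that “lock on the door”, the toy `DataVault` below releases personal data only to authorized parties, and only for purposes the user has consented to. The class and its API are entirely hypothetical, illustrating the privacy-by-design idea rather than any real library.

```python
class ConsentError(PermissionError):
    """Raised when data is requested without authorization or consent."""

class DataVault:
    """Hold personal data; release it only to authorized parties,
    and only for purposes the user explicitly consented to."""

    def __init__(self):
        self._data = {}          # user -> personal record
        self._consents = {}      # user -> set of consented purposes
        self._authorized = set() # party ids allowed to query at all

    def store(self, user, record, consented_purposes):
        self._data[user] = record
        self._consents[user] = set(consented_purposes)

    def authorize(self, party):
        self._authorized.add(party)

    def read(self, party, user, purpose):
        if party not in self._authorized:
            raise ConsentError("unauthorized party")
        if purpose not in self._consents.get(user, set()):
            raise ConsentError("no consent for this purpose")
        return self._data[user]

vault = DataVault()
vault.store("alice", {"thermostat": 21.5}, consented_purposes={"energy_saving"})
vault.authorize("home_app")
print(vault.read("home_app", "alice", "energy_saving"))  # allowed
# vault.read("ad_network", "alice", "marketing") would raise ConsentError
```

Both locks matter: an authorized app still cannot reuse the data for a purpose the user never agreed to.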

Final Thoughts

For AI to truly succeed and be embraced by the masses, it must be trustworthy.

We can’t ignore the risks associated with it, but we can mitigate those risks by focusing on safety, security, and privacy. AI systems need to be reliable, and developers should always aim to meet the highest standards in protecting users’ data.

When AI is safe, secure, and controllable, we all stand to benefit from its incredible potential.

In every project I’ve been involved in, from IoT solutions to smart cities, this principle has been at the forefront: build systems that people can trust.

Only then can we realize AI’s full potential in transforming industries, healthcare, and our daily lives.

Understanding Fairness in AI

UNDERSTANDING ARTIFICIAL INTELLIGENCE

How to build trust in AI machines and software.


As I explore Artificial Intelligence (AI), one principle resurfaces in almost every conversation: fairness. But what does fairness mean when we talk about AI?

AI systems should be designed and implemented to avoid bias and discrimination. It sounds simple, but the more I think about it, the more complex it becomes. How can we ensure that a machine, learning from data that may contain past biases, remains fair to everyone?

I’ve spent years working in technology, from telecommunications to IoT, and I’ve seen firsthand how tech can change lives.

But what happens when this powerful technology, which is supposed to serve everyone, starts favoring particular groups? That’s the real issue with biased AI. Unfortunately, it’s not just a hypothetical concern—it’s happening all around us.

“Is AI fair?” I often ask myself. And the answer, unfortunately, isn’t always “yes.”

Example 1: The Recruitment Algorithm

Let me start with an easy-to-grasp scenario. Imagine a company using AI to screen job applicants.

The goal is simple: the AI looks at resumes and selects the best candidates for the job.

It sounds efficient.

But what if the historical data fed into the system reflects past biases? What if, historically, the company has hired more men than women for tech roles?

The AI would begin to learn from this data, thinking that men are more likely to succeed in these roles. The result? The AI starts favoring male candidates, even if female candidates are equally or more qualified.

As I think about this, I realize the real danger isn’t just the immediate bias — it’s the fact that it can perpetuate and amplify over time.

“What if this AI system continues being used for years?” I ponder. “How many qualified candidates will be unfairly rejected just because the AI absorbed a biased pattern from the past?”

This is why fairness is critical in AI systems.

We need to ensure that the algorithms don’t just mimic the past but actively help us create a more equitable future.
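One widely used screen for this kind of disparity compares selection rates across groups; a group whose rate falls below 80% of the highest group’s rate is flagged (the “four-fifths” rule of thumb from US hiring guidelines). Here is a minimal sketch, with made-up numbers echoing the scenario above.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if chosen else 0)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """True for groups at or above 80% of the top group's selection rate;
    False signals possible adverse impact worth investigating."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: r / top >= 0.8 for g, r in rates.items()}

# Invented history: the screener advanced 60% of men but only 30% of women.
history = [("men", True)] * 60 + [("men", False)] * 40 \
        + [("women", True)] * 30 + [("women", False)] * 70
print(selection_rates(history))
print(four_fifths_check(history))  # women fall below the 80% screen
```

Failing this screen doesn’t prove discrimination by itself, but it is exactly the kind of routine check that stops a biased pattern from compounding silently for years.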

Example 2: AI in Healthcare

Another troubling example is in healthcare.

Imagine an AI system that helps doctors decide who should receive life-saving treatment first. Ideally, it should be a neutral tool that analyzes medical data to determine who is in the most critical condition.

But what if the AI has been trained on data favoring one demographic over another, such as wealthier patients who typically have better access to healthcare?

The AI might then start recommending treatments to wealthier individuals while overlooking those from underprivileged backgrounds who may have just as critical a need.

“How can we let this happen in healthcare?” I ask myself. The stakes are too high. It’s a matter of life and death, and if we can’t ensure fairness in these systems, we are failing those who need help the most.

This is why AI fairness isn’t just a technical issue — it’s a moral one.

We’re dealing with real people’s lives, and any bias, no matter how small, can have far-reaching consequences.

Example 3: Facial Recognition and Law Enforcement

Facial recognition technology is another area where fairness is crucial. Several studies have shown that facial recognition systems often struggle to identify people with darker skin tones accurately.

“How is this possible?” I ask myself. “With all our advancements, how can a system still make such glaring errors?”

But then I realized: it all comes back to the data. If the AI was trained primarily on images of lighter-skinned individuals, it will be less accurate at identifying darker-skinned people. If law enforcement agencies rely on these systems, it can lead to unjust outcomes, such as wrongful arrests or misidentification.

“Imagine being misidentified by an AI system just because it wasn’t trained properly,” I think.

The impact of such a failure is profound.

People’s lives can be turned upside down instantly, all because an algorithm wasn’t built with fairness in mind.
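A basic safeguard here is simply to report accuracy per group instead of one overall number. The sketch below uses invented figures to illustrate the kind of gap those studies found; the group labels and sample counts are assumptions for demonstration only.

```python
def accuracy_by_group(samples):
    """samples: list of (group, predicted_id, true_id) -> accuracy per group."""
    correct, total = {}, {}
    for group, predicted, true in samples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if predicted == true else 0)
    return {g: correct[g] / total[g] for g in total}

# Invented evaluation set: the system is far less accurate on one group.
samples = (
    [("lighter", "p1", "p1")] * 95 + [("lighter", "p2", "p1")] * 5
    + [("darker", "p1", "p1")] * 70 + [("darker", "p2", "p1")] * 30
)
acc = accuracy_by_group(samples)
print(acc)
print(f"accuracy gap: {acc['lighter'] - acc['darker']:.2f}")
```

A single aggregate accuracy of, say, 82% would have hidden this gap entirely; the per-group breakdown is what makes the unfairness visible and fixable.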

The Path Forward

So, how do we ensure fairness in AI?

It starts with the data. We need diverse and representative datasets to train these systems. But it also requires constant vigilance. Even with the best data, biases can creep in through the design or implementation of the AI system itself.

I often remind myself, “It’s not enough to trust that AI will ‘figure it out’ on its own. As developers and users, we have to be proactive in identifying and correcting biases.” It’s a responsibility that we must take seriously, especially as AI becomes more integrated into every aspect of our lives.

For me, fairness in AI is about ensuring that the technology we build serves everyone equally.

It’s about not allowing past biases to shape the future.

It’s about holding ourselves accountable to the highest ethical standards. Only then can we truly unlock AI’s potential in a way that benefits all of humanity.

What it Takes to Build a National AI Centre

This wasn’t just about building a center. It was about building Malaysia’s future.

It all started with a question. “Dr. Mazlan, do you think Malaysia needs a national AI center?”

At first, I paused. It was a question I had been grappling with for some time, but hearing it from others made me realize just how urgent the conversation had become. Artificial Intelligence (AI) isn’t just a buzzword anymore; it’s a transformative technology already reshaping industries worldwide. And if we don’t act now, we risk being left behind.

The first time I was asked this question, I remember sitting at a roundtable discussion with some of Malaysia’s top tech leaders. I could feel the weight of the moment. This wasn’t just an academic debate but a call to action.

“Yes,” I replied firmly. “We need a national AI center.”

But the follow-up questions came quickly. “What does it take to build such a center? How do we ensure its success? What infrastructure do we need? And what about talent? Can Malaysia really compete on a global stage?”

I found myself reflecting on my experience building Favoriot. There were striking similarities between the early challenges we faced with IoT and the new hurdles with AI. In both cases, it wasn’t just about the technology. It was about creating an ecosystem where innovation could thrive, talent could flourish, and industries could benefit.

Setting up a national AI center is the same. It’s about creating the right conditions for AI to impact meaningfully across sectors.

The Infrastructure Dilemma

Everyone seems to ask the first question: What infrastructure does a national AI center need?

It’s a fair question, and one I’ve spent much time pondering. From my experience with Favoriot, I learned that infrastructure is the foundation upon which everything else is built. Without suitable systems, you’re doomed to fail before you even begin.

For AI, this means investing heavily in computational power. You can’t have AI without data, and you can’t process that data without high-performance computing. But it’s not just about raw computing power. We must consider the entire data pipeline — from storage and processing to analysis and action.

As I was explaining this to a colleague recently, I compared it to our early days at Favoriot. “Remember when we first started building our IoT platform?” I asked. “We underestimated how much data we’d need to handle, and we were constantly upgrading our servers. AI will be like that, but on a much larger scale.”

We’ll need data centers that can scale to handle current demand and future growth. The cloud will be a critical part of this, as will edge computing, particularly for real-time applications. And then there’s the question of connectivity. Malaysia’s digital infrastructure is improving, but there’s still work to be done. We’ll need 5G to ensure the high-speed, low-latency networks that AI applications depend on.

I remember thinking about the logistics of all this. “Where do we even start?” I asked myself. “How do we ensure the infrastructure we build today isn’t obsolete tomorrow?”

It’s a daunting challenge but not an impossible one. With the right partnerships — local telcos and international tech companies — we can build the infrastructure an AI center needs to thrive.

Talent: The Heart of AI

As crucial as infrastructure is, it’s not the only thing that matters. The next big question is talent.

“Do we have enough AI talent in Malaysia?” someone asked me recently.

I paused. “Not yet,” I admitted. “But we can get there.”

Talent will be the most critical factor in determining whether or not a national AI centre succeeds. We need data scientists, machine learning engineers, AI researchers, and a host of other specialists who understand the nuances of AI.

I’ve seen this firsthand at Favoriot. Finding people who understood IoT early on was challenging, and AI is no different. We’re not just competing with local companies for this talent; we’re competing globally. Countries like the US, China, and South Korea are pouring resources into developing their AI talent pools.

But here’s where I’m optimistic. Malaysia has a young, tech-savvy population, and our universities are producing brilliant engineers and data scientists.

What we need is to create pathways for them to specialize in AI.

I remember discussing this with a professor recently. “We need to embed AI into the curriculum at all levels of education,” I said, “from secondary schools to universities. AI can’t be a niche subject — it must be a core part of our education system.”

But education alone isn’t enough.

We need to create opportunities for this talent to grow. That means internships, apprenticeships, and partnerships with the private sector. The National AI Center could act as a hub, connecting students and researchers with industry and giving them real-world problems to solve.

“Imagine a place,” I told a colleague, “where students, startups, and multinational companies are all working together, learning from each other, and pushing the boundaries of what AI can do. That’s what the national AI center could be.”

Collaboration: The Key to Success

This brings me to the next big question: how do we foster stakeholder collaboration?

This is where the real challenge lies. My experience at Favoriot taught me that collaboration isn’t always easy. There are so many different interests at play — government, industry, academia — and getting everyone on the same page can be challenging. But it’s essential.

Someone recently asked me, “Why do we need a national AI center? Why not let the private sector handle AI development?”

It’s a valid question and one that I’ve heard many times.

The answer lies in AI itself. AI isn’t just another technology; it’s a general-purpose technology that will impact every sector, from healthcare and education to finance and agriculture. No single entity can build an AI ecosystem independently; it requires collaboration.

The National AI Center would be a place where different stakeholders come together. The government could set policies and regulations that ensure AI is developed and used ethically. Universities could focus on research and training. Startups could experiment with new AI applications, and large corporations could scale those innovations.

“Think about it,” I told a friend recently. “If we can bring together the best minds from government, academia, and industry, we can create something truly special — a place where innovation happens at the intersection of different perspectives.”

The Benefits for Industry and Startups

One of the most exciting aspects of setting up a national AI center is the potential benefits for industry and startups.

When I first started Favoriot, I envisioned how IoT could transform industries in Malaysia. And while it took time, we now see that vision come to life. AI is poised to have a similar, if not more significant, impact.

The national AI center could provide a platform for established industries to experiment with new AI technologies without investing in expensive infrastructure. Imagine a manufacturing company collaborating with AI researchers to develop predictive maintenance algorithms or a healthcare provider working with data scientists to create personalized treatment plans using AI.

The possibilities are endless.

And for startups? The National AI Center could be a game-changer. Startups often have brilliant ideas but lack the resources to bring those ideas to life. The AI center could provide them with the computational power, data, and expertise they need to scale their innovations.

I’ve seen how difficult it can be for startups to break into industries that are traditionally slow to adopt new technologies. However, with the support of a national AI center, those barriers could be lowered. Startups could test their ideas, get feedback from industry leaders, and scale their solutions faster.

I remember talking to a startup founder recently who was working on an AI-powered solution for agriculture. “We have the technology,” he told me, “but we need access to data and the right partners to scale.”

That’s where the National AI Centre comes in. It would act as a bridge, connecting startups with the data, infrastructure, and partnerships they need to succeed.

A Vision for the Future

As I sit here, reflecting on these conversations, I can’t help but feel a sense of urgency. The world is moving quickly, and AI will be at the heart of that change. Malaysia has the potential to lead, but only if we act now.

“Can we do this?” I asked myself one evening as I sketched out ideas for the center. The answer is yes. However, it will require a concerted effort from government, industry, academia, and startups.

Setting up a national AI center is a bold vision, but it can transform Malaysia into a leader in AI innovation. With the proper infrastructure, talent, and collaborations, we can create an AI ecosystem that benefits everyone — industries, startups, and the nation.

When we look back in a few years, I believe we’ll see that this wasn’t just about building a center. It was about building Malaysia’s future.