The Crucial Role of Inclusiveness in AI

AI ETHICS

Principle of AI — Inclusiveness

Image created using ChatGPT

I was fascinated by AI’s potential. It seemed like the future, with endless possibilities to revolutionize healthcare, education, and legal systems.

But one thought kept nagging at me: Who benefits from this technology? It dawned on me that if AI only serves a select group, it could widen existing social inequalities. If AI is only built for those with the most access, are we moving forward?

My work with the Internet of Things (IoT) and smart cities has already shown me how technology, while promising to enhance urban living, often caters to those with the resources to use it.

That same realization hit me with AI: AI must be inclusive.

It has to serve everyone, especially the vulnerable, or we risk creating deeper societal divisions. This is why AI must align with the principles of our Federal Constitution, which emphasizes equality, justice, and fairness for all.

Building Inclusiveness into AI Development

The first step to creating inclusive AI is ensuring the systems are designed for everyone, not just the privileged few.

I remember discussing this with a colleague. I asked, “What happens when AI systems in healthcare only use data from urban hospitals that serve wealthier patients?”

We both knew the answer.

Those systems wouldn’t be effective in rural areas, where diseases manifest differently and healthcare resources are more limited.

This example stayed with me. Imagine an AI designed to detect skin cancer, I thought. If it’s only trained on images of light-skinned individuals, what happens when it’s used on darker-skinned patients?

The answer is obvious: it could misdiagnose or fail to identify the condition entirely. Such bias has serious consequences—it could lead to poorer healthcare outcomes for large sections of the population.

That’s why AI systems need diverse data. We can ensure that AI serves everyone equally by training models on datasets that include various skin tones, environments, and lifestyles.
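
To make this concrete, here is a minimal sketch of what a dataset-composition audit could look like before training even begins. The metadata file name, the skin_tone column, and the 10% threshold are all illustrative assumptions on my part, not a standard:

```python
from collections import Counter
import csv

def attribute_shares(metadata_path: str, attribute: str) -> dict:
    """Compute each group's share of the training set from a metadata
    CSV that has one row per training image."""
    with open(metadata_path, newline="") as f:
        counts = Counter(row[attribute] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Flag any skin-tone group that makes up less than 10% of the data.
shares = attribute_shares("train_metadata.csv", "skin_tone")
for group, share in sorted(shares.items()):
    marker = "  <-- underrepresented" if share < 0.10 else ""
    print(f"{group}: {share:.1%}{marker}")
```

An audit like this doesn't fix bias by itself, but it makes gaps visible early, before the model quietly inherits them.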

I remember thinking, This is more than just good design — it’s about justice. AI has to reflect the diversity of the people it’s meant to serve, or we’re not living up to our national values of fairness and equality.

Addressing the Needs of Vulnerable Groups

Then, there’s the issue of how AI tools can meet the specific needs of vulnerable populations.

AI should not be reserved for those who live in developed, well-connected areas or who can afford the latest technology. It must serve everyone, especially those most in need.

One day, I was thinking about the legal system and how difficult it is for many people to get proper legal representation.

I thought, “What if an AI could provide essential legal advice to those who can’t afford a lawyer?” This idea felt like a breakthrough. AI could help people understand their legal rights, assist in drafting contracts, or even generate legal documents.

But then another thought came to mind: What about people who struggle with reading? Or those without reliable internet access?

For AI to be inclusive, it must account for these users.

I imagined an AI legal assistant offering voice guidance for people with lower literacy levels or an AI working offline to reach remote areas. It became clear to me that AI could be the key to equal access to justice—but only if it’s designed to include everyone.

This aligns perfectly with our national principles of fairness and equality.

Ensuring Diversity Among AI Developers

As much as inclusiveness is about the technology itself, it’s also about who is building it. A diverse team of developers brings different perspectives, helping identify and address biases early on.

Are the people building this AI as diverse as those it serves?

Education is a perfect example of how a lack of diversity in AI development can lead to unintended consequences.

I once discussed AI-powered systems for grading student essays. I wondered, “What if the AI is biased towards a specific cultural or linguistic group?”

Imagine a system that unintentionally favors students from urban areas who are more familiar with specific cultural references. Students from rural or minority backgrounds could be unfairly marked down simply because the AI doesn’t understand their context.

That’s where a diverse team of developers comes in.

They would bring a broader range of experiences and insights, helping to design AI systems that are fairer and more inclusive.

I pictured a scenario where developers from various backgrounds are involved in creating an AI-powered educational tool. A diverse team would recognize that not all students have the same internet access, so they would design the system to work offline or in low-bandwidth environments.

That’s how AI can truly level the playing field for students, I thought. It’s about giving every student an equal chance, no matter where they come from.

Moving Forward with Inclusive AI

As I reflect on the future of AI, one thing becomes clear to me: Inclusiveness is not a choice; it’s a necessity.

If we’re not careful, AI could widen the gaps we want it to close.

That’s why we need to ensure that AI development is inclusive, that tools are designed to meet the needs of vulnerable groups, and that the teams behind these systems are as diverse as the society they serve.

In my work with IoT and smart cities, I’ve always aimed to make technology accessible to as many people as possible.

The same approach must be taken with AI.

By focusing on inclusiveness, we can ensure AI systems benefit everyone, which aligns with our Federal Constitution and National Principles. This isn’t just about technology; it’s about creating a fairer, more just world.

In the end, I realized that inclusiveness in AI isn’t a luxury—it’s essential.

If we don’t take inclusiveness seriously, we risk creating technology that serves only the privileged and leaves the rest behind.

And that’s not the future I want to build.

The Importance of Transparency in AI

AI ETHICS

Building Trust in AI Software

Image created using ChatGPT

I was fascinated by AI’s power to automate complex tasks, solve problems, and even make decisions that typically require human judgment.

But as I dug deeper into the mechanics of AI, one question kept coming to my mind: How do we ensure that AI is doing what it’s supposed to do?

More importantly, how do we ensure everyone affected by AI decisions understands what’s happening behind the scenes? That’s where the principle of transparency comes into play.

Transparency in AI isn’t just about ensuring the technical aspects are visible to a select group of developers or engineers. It’s about ensuring that the processes and decisions made by AI systems can be explained to all stakeholders — whether they’re technical experts, end users, or decision-makers.

AI must not be a “black box” where decisions are made but no one understands how or why.

This idea of transparency is essential when AI makes decisions that impact people’s lives. Whether deciding who gets a loan, determining the outcome of a legal case, or even influencing hiring decisions, transparency allows stakeholders to evaluate the risks and address any issues.

Full Disclosure: When AI is Making Decisions

One key aspect of transparency is being upfront when AI is involved in decision-making.

Let’s consider a scenario in the hiring process.

Imagine applying for a job, going through an interview, and later finding out that the final decision on whether you were hired was made by an AI system instead of a human.

I often think about this: Wouldn’t it be frustrating if you didn’t know an AI was involved? That’s why it’s so crucial for companies and organizations to disclose when AI systems are being used in decision-making processes.

People have a right to know if an algorithm is influencing the decisions that affect their lives.

It’s not just a matter of ethics — it’s about trust.

Let’s say a company uses an AI system to screen job applicants. Full disclosure would mean informing applicants upfront that an AI tool is part of the selection process, explaining how it works, and outlining what data it considers.

With this transparency, candidates can have far more confidence in the process; without it, a rejection that comes with no explanation quickly erodes trust.

Transparency gives people the opportunity to understand and even challenge decisions if needed.

The Purpose Behind the AI System

Another critical element of transparency is ensuring the AI system’s purpose is clear.

Take, for example, a facial recognition system used in security.

How many people understand the full extent of facial recognition’s purpose? Is it merely for security, or is it also used to track individuals for marketing purposes?

Stakeholders should always be aware of the purpose of the AI systems they interact with. If a facial recognition system is used at an airport for security purposes, passengers must know precisely what the system is doing, what kind of data is being collected, and how it’s being used.

Without this clarity, there’s a risk of misuse or mistrust.

One real-world example is when social media platforms use AI to filter content.

If users are unaware that AI systems are screening and categorizing their posts, they may struggle to understand why specific posts are taken down or flagged. This lack of transparency can create confusion, making people feel their rights are being violated.

Understanding the Data: Bias and Quality Control

Whenever I think about AI transparency, the issue of training data comes to mind.

AI systems are only as good as the data they’re trained on, but often, the data contains biases that reflect historical or social inequalities. The data used to train AI must be disclosed and scrutinized to ensure fairness.

Take the example of AI systems used in the legal system.

Imagine an AI tool designed to predict the likelihood of someone reoffending after being released from prison. If the data used to train the AI is biased — perhaps it overrepresents specific communities — it could lead to unfair outcomes.

What if the AI system was unknowingly biased against a specific demographic? These biases could go unchecked without transparency about the training data, perpetuating discrimination.

In my view, transparency in AI isn’t just about disclosing that AI is being used — it’s also about being open about the data and processes behind it. Stakeholders need to know what historical and social biases might exist in the data, what procedures were used to ensure data quality, and how the AI system was maintained and assessed.
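
One lightweight way teams approach this is to publish a structured summary alongside the system, loosely inspired by the published “model cards” and “datasheets for datasets” proposals. The sketch below is hypothetical; every field and value is illustrative:

```python
# A hypothetical transparency record published alongside an AI system.
transparency_record = {
    "purpose": "Pre-screen loan applications for human review",
    "automated_decision": True,               # disclosed to every applicant
    "training_data": {
        "source": "internal applications, 2015-2023",
        "known_gaps": ["few applicants from rural districts"],
        "bias_checks": ["approval rate by gender", "by age band"],
    },
    "maintenance": {
        "last_retrained": "2024-06-01",
        "monitoring": "monthly drift report",
    },
    "appeal_channel": "appeals@example.com",  # placeholder address
}
```

Even a simple record like this forces a team to answer, in writing, the questions stakeholders would otherwise have to guess at.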

Maintaining and Assessing AI Systems

An often overlooked but equally important aspect of AI transparency is how these systems are maintained over time.

Just because an AI model works well today doesn’t mean it will work as expected tomorrow. What if the data changes or the system starts to degrade over time?

I always think of this in the context of healthcare. Imagine an AI system used to assist doctors in diagnosing patients. The system was trained on medical data several years ago, but medical knowledge and treatments have evolved rapidly. Without regular updates and assessments, the AI would fall out of date, leading to inaccurate diagnoses.

Transparency means informing users about how the AI system works now and keeping them updated on how it’s maintained and monitored over time. This ensures that AI systems remain effective and fair.
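
As a concrete illustration, monitoring can start as simply as watching how far live inputs drift from the data the model was trained on. In this toy check, the values and the 0.5-standard-deviation threshold are my own assumptions:

```python
import statistics

def drift_alert(train_values, live_values, threshold=0.5):
    """Flag drift when the live mean moves more than `threshold`
    training standard deviations away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.mean(live_values) - mu) / sigma
    return shift > threshold, shift

# Toy example: patient ages at training time vs. this month's intake.
alert, shift = drift_alert([54, 61, 58, 49, 65], [38, 41, 35, 44, 39])
print(f"shift = {shift:.2f} sd -> retraining review needed: {alert}")
```

Real monitoring pipelines are far richer than this, but the principle is the same: measure the gap, publish the result, and act on it.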

The Right to Challenge AI Decisions

Finally, AI transparency must include people’s ability to challenge decisions made by AI systems.

This is crucial for building trust.

If someone feels an AI system has treated them unfairly—say, by denying them a loan or incorrectly flagging them in a security screening—they should have the right to question and appeal the decision.

I often ask myself, How would I feel if an AI made a decision about me, and I had no way to contest it? This is where transparency plays a pivotal role.

It’s not enough for people to know an AI system made a decision — they also need to know how to challenge that decision.

Transparency ensures that AI systems are accountable through a human review process or by providing clear channels for appeals.

Moving Forward with Transparent AI

It’s clear that transparency is not a luxury—it’s a necessity.

Without it, AI systems risk becoming tools people don’t trust or understand. To succeed, AI must be transparent in its processes, decisions, and data usage.

Transparency principles — whether they involve disclosing AI’s role in decision-making, clarifying its intended purpose, or allowing for challenges — are essential to building trust in AI systems.

This is the only way to ensure AI systems benefit everyone fairly and responsibly.

Why AI Must Be Fail-Safe: Ensuring Reliability and Human Oversight

PRINCIPLES OF AI

Building Trust in AI: The Power of Reliability, Safety, and Control

Image created using ChatGPT

The Importance of Reliability in AI

As someone who has worked extensively with technology, I’ve always emphasized the importance of reliability in AI systems. Reliability isn’t just a buzzword; it means that AI works as expected under normal and challenging circumstances.

Take the example of autonomous vehicles.

Imagine a self-driving car cruising down the highway on a sunny day — everything seems fine. But what happens when the weather suddenly changes? What if it starts raining heavily or if fog sets in? The car’s AI must remain reliable in identifying obstacles, following traffic rules, and ensuring passenger safety. If the system fails under these conditions, it’s not ready for real-world use.

“Would I trust this system if my safety depended on it?” Developers need to ask themselves this question. Reliability doesn’t mean perfection, but it means that the system does what it was designed to do under most circumstances.

When it encounters unexpected situations, it must still respond appropriately.

Safety in AI: More Than Just a Feature

Safety is crucial to AI, especially when human lives are at stake.

One simple yet powerful example of AI contributing to safety is found in modern vehicles — many now come equipped with AI features like automatic emergency braking.

Imagine you’re driving, and suddenly, the car in front of you stops abruptly. You might not have time to react, but the car’s AI does. It slams on the brakes to avoid a collision.

This shows how AI can enhance safety by making quick, life-saving decisions. However, this only works if the AI system has been thoroughly tested and proven to act reliably in such scenarios.

Fail-safe mechanisms are essential. If an AI system encounters an error or an unexpected situation, it must default to a state that avoids harm. A failure in high-risk environments like healthcare or transportation could lead to catastrophic outcomes. Fail-safe design ensures the system handles the situation without causing damage, even in the worst-case scenario.
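
A minimal sketch of that idea, assuming a controller function and a “stop and notify” safe default (both hypothetical), might look like this:

```python
import logging

SAFE_STATE = {"action": "stop", "notify_operator": True}  # assumed safe default

def fail_safe(controller):
    """Wrap an AI controller so any error degrades to a harmless
    default instead of propagating into the physical system."""
    def wrapped(sensor_input):
        try:
            decision = controller(sensor_input)
            if decision is None:  # treat "no answer" as a failure too
                raise ValueError("controller returned no decision")
            return decision
        except Exception:
            logging.exception("Controller failed; entering safe state")
            return SAFE_STATE
    return wrapped
```

The point is the shape of the design: the dangerous path should require everything to go right, while the default path stays harmless.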

I remember a colleague working on a project with industrial robots where safety was a huge concern. The question constantly on my mind was: “What happens if the robot misinterprets its task and causes an accident?”

The solution was to incorporate multiple layers of safety, including emergency stops and manual overrides. These features gave workers the confidence to operate near the robots, knowing they could intervene if necessary.

Controllability: Ensuring Human Oversight

Humans must maintain ultimate control over AI systems in high-risk areas like military applications or autonomous vehicles. While AI can make quick decisions, humans must be able to override the system if something goes wrong.

For example, AI might control drones or weapon systems in military applications. While these systems can make quick, efficient decisions, human judgment and oversight are still crucial.

I often remind myself, “Autonomy doesn’t mean lack of oversight.” AI should be autonomous but never beyond human control.

Maintaining control is not just about trusting the AI; it’s about ensuring that humans can manage and control these systems effectively. AI should work hand-in-hand with human operators, particularly in scenarios where lives are at stake.

The Role of Testing and Certification

Rigorous testing is one of the most critical steps to ensuring reliability, safety, and control in AI. This isn’t a one-time process; it must be ongoing. The real world constantly changes, and AI systems must adapt to new conditions and scenarios.

Developers and end-users should conduct regular certification and risk assessments. These assessments help identify potential weaknesses or vulnerabilities in the system, ensuring that AI meets the necessary reliability, safety, and control standards.

Without these steps, the systems we build won’t inspire trust; without trust, they can never reach their full potential.

Conclusion: Trust Through Testing

The future of AI depends on our ability to trust these systems.

Trust can only be built through robust testing, thoughtful design, and maintaining human control. As I often remind myself, “An AI system that cannot be trusted will never be used to its full potential.”

Trust comes from knowing these systems are reliable, safe, and controllable, even in critical situations.

Adhering to these core principles is essential for AI to thrive in healthcare, autonomous vehicles, or military applications.

Developers must prioritize testing, and users must be confident that they control these systems. Only then will AI be ready for widespread adoption in our everyday lives.

Protecting Your Future: Why AI Security and Privacy Matter

ABOUT ARTIFICIAL INTELLIGENCE (AI)

Security and Privacy — Principle of AI

When we talk about artificial intelligence (AI), one of the most important things to remember is that AI must be private and secure. It’s like driving a car.

You want the car to function properly, keep you safe, and always be in your control.

AI is no different.

These systems must perform as intended and resist tampering, especially by unauthorized parties.

In my experience working with IoT and smart cities, I have seen the risks and benefits of AI, and developers need to ensure that safety and security are built into every system from the beginning.

Let me explain with some simple examples.

Example 1: Self-driving Cars

One of the most exciting advancements in AI is the development of self-driving cars. Imagine a vehicle designed to drive itself from point A to point B.

The promise of these cars is enticing: fewer accidents, no need for human intervention, and efficient traffic management.

But what happens if the AI controlling the car is hacked? What if an unauthorized party can take control and steer the vehicle into danger?

This is where safety and reliability come into play.

The AI system must be designed to resist such interference. Developers must ensure that only authorized individuals can interact with the AI’s decision-making process.

If someone tries to hack into the system, the AI must be able to detect and prevent the intrusion. Without this security, the risk of accidents increases dramatically, and people may lose trust in AI technology.
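
One small building block for that kind of protection is message authentication: the vehicle rejects any control command that isn’t signed with a secret key it shares with the authorized controller. This is only a sketch of one layer (real vehicle security also needs secure boot, key management, and much more), and the command format here is invented:

```python
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-provisioned-secret"  # placeholder key

def sign(command: bytes, key: bytes = SHARED_KEY) -> str:
    return hmac.new(key, command, hashlib.sha256).hexdigest()

def accept(command: bytes, signature: str, key: bytes = SHARED_KEY) -> bool:
    """Reject any command whose signature doesn't verify; an attacker
    without the key cannot forge a valid signature."""
    return hmac.compare_digest(sign(command, key), signature)

msg = b"set_speed:60"
assert accept(msg, sign(msg))                    # legitimate command passes
assert not accept(b"set_speed:120", sign(msg))   # tampered command rejected
```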

In my experience with IoT and smart city solutions, we must design systems with these safeguards from the ground up.

AI systems should be tested rigorously under various scenarios to ensure they perform as intended, even in unexpected conditions.

For instance, just as we ensure an IoT device in a smart city responds safely during a power outage, a self-driving car should still behave responsibly if something goes wrong.

Example 2: AI-powered Healthcare Diagnostics

Another powerful application of AI is in healthcare.

AI systems are now being used to assist doctors in diagnosing diseases based on medical images or patient data. Consider how an AI system can analyze thousands of medical scans in seconds to identify potential problems like tumors or heart conditions.

But what if the AI system gives a wrong diagnosis? Or what if someone manipulates the data to favor certain patients while discriminating against others?

Here’s where privacy and data protection become crucial.

Developers must obtain consent before using someone’s personal health data to develop or run an AI system. Patients must know how their data is being used and should have the right to control it.

Data collected for these purposes should never be used to discriminate against patients based on race, gender, or other factors.

Incorporating security-by-design and privacy-by-design principles ensures that data is protected from misuse throughout the AI system’s entire lifecycle.
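
In practice, privacy-by-design starts before the model ever sees a record. Here is a minimal sketch of gating and pseudonymizing data on its way into a training pipeline; the field names and the consent registry are hypothetical:

```python
import hashlib
from typing import Optional

def prepare_record(record: dict, consented_ids: set, salt: bytes) -> Optional[dict]:
    """Drop records without documented consent; pseudonymize the rest
    so the training pipeline never sees a raw patient identifier."""
    if record["patient_id"] not in consented_ids:
        return None  # no consent, no use
    pseudonym = hashlib.sha256(salt + record["patient_id"].encode()).hexdigest()[:16]
    return {"pid": pseudonym, "scan": record["scan"], "label": record["label"]}
```

Note that salted hashing is pseudonymization, not full anonymization; it reduces exposure but doesn’t remove the duty to protect the data.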

Developers should also adhere to international data protection standards so patients can trust that their health data is safe and won’t be used unlawfully. As someone who has worked with data from IoT systems, I know how easily personal data can be misused if not handled carefully.

Example 3: AI in Smart Home Devices

Now, let’s look at something more straightforward: smart home devices. Many people use AI-powered gadgets in their homes, like smart thermostats, voice-activated assistants, or security cameras.

These devices collect a lot of personal data.

Imagine if someone could access your security camera without your permission or your voice assistant recorded your conversations and shared them with companies you don’t know about.

Developers of these AI systems must obtain user consent before collecting and using this data. And once the data is collected, it must be protected.

The system should guarantee privacy, meaning the information stays confidential and cannot be accessed by unauthorized parties.

Moreover, the system must be transparent about how the data is used so that users can make informed decisions.

I often tell people that IoT and AI systems are like locks on a door. You wouldn’t leave your front door unlocked for anyone to walk in, right? In the same way, AI systems must lock down data and make sure only the right people have access.

A secure and privacy-conscious design helps build trust with users, which is essential for the widespread adoption of AI technologies.

Final Thoughts

For AI to truly succeed and be embraced by the masses, it must be trustworthy.

We can’t ignore the risks associated with it, but we can mitigate those risks by focusing on safety, security, and privacy. AI systems need to be reliable, and developers should always aim to meet the highest standards in protecting users’ data.

When AI is safe, secure, and controllable, we all stand to benefit from its incredible potential.

In every project I’ve been involved in, from IoT solutions to smart cities, this principle has been at the forefront: build systems that people can trust.

Only then can we realize AI’s full potential in transforming industries, healthcare, and our daily lives.

Understanding Fairness in AI

UNDERSTANDING ARTIFICIAL INTELLIGENCE

How to build trust in AI machines and software.

Image created using ChatGPT

As I explore Artificial Intelligence (AI), one principle resurfaces in almost every conversation: fairness. But what does fairness mean when we talk about AI?

AI systems should be designed and implemented to avoid bias and discrimination. It sounds simple, but the more I think about it, the more complex it becomes. How can we ensure that a machine, learning from data that may contain past biases, remains fair to everyone?

I’ve spent years working in technology, from telecommunications to IoT, and I’ve seen firsthand how tech can change lives.

But what happens when this powerful technology, which is supposed to serve everyone, starts favoring particular groups? That’s the real issue with biased AI. Unfortunately, it’s not just a hypothetical concern—it’s happening all around us.

“Is AI fair?” I often ask myself. And the answer, unfortunately, isn’t always “yes.”

Example 1: The Recruitment Algorithm

Let me start with an easy-to-grasp scenario. Imagine a company using AI to screen job applicants.

The goal is simple: the AI looks at resumes and selects the best candidates for the job.

It sounds efficient.

But what if the historical data fed into the system reflects past biases? What if, historically, the company has hired more men than women for tech roles?

The AI would begin to learn from this data, thinking that men are more likely to succeed in these roles. The result? The AI starts favoring male candidates, even if female candidates are equally or more qualified.

As I think about this, I realize the real danger isn’t just the immediate bias — it’s the fact that it can perpetuate and amplify over time.

“What if this AI system continues being used for years?” I ponder. “How many qualified candidates will be unfairly rejected just because the AI absorbed a biased pattern from the past?”

This is why fairness is critical in AI systems.

We need to ensure that the algorithms don’t just mimic the past but actively help us create a more equitable future.
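
One widely used heuristic for catching this kind of skew is the “four-fifths rule” from US hiring guidance: flag adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch with made-up numbers:

```python
def selection_rates(outcomes):
    """outcomes: (group, was_selected) pairs from past screening runs."""
    totals, selected = {}, {}
    for group, chosen in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag any group whose selection rate is below 80% of the top rate."""
    top = max(rates.values())
    return {g: {"ratio": round(r / top, 2), "flagged": r / top < 0.8}
            for g, r in rates.items()}

rates = selection_rates([("men", 1), ("men", 1), ("men", 0),
                         ("women", 1), ("women", 0), ("women", 0)])
print(four_fifths_check(rates))
# {'men': {'ratio': 1.0, 'flagged': False},
#  'women': {'ratio': 0.5, 'flagged': True}}
```

A check like this is only a first alarm, not proof of fairness, but it turns a vague worry into a number a team must explain.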

Example 2: AI in Healthcare

Another troubling example is in healthcare.

Imagine an AI system that helps doctors decide who should receive life-saving treatment first. Ideally, it should be a neutral tool that analyzes medical data to determine who is in the most critical condition.

But what if the AI has been trained on data favoring one demographic over another, such as wealthier patients who typically have better access to healthcare?

The AI might then start recommending treatments to wealthier individuals while overlooking those from underprivileged backgrounds who may have just as critical a need.

“How can we let this happen in healthcare?” I ask myself. The stakes are too high. It’s a matter of life and death, and if we can’t ensure fairness in these systems, we are failing those who need help the most.

This is why AI fairness isn’t just a technical issue — it’s a moral one.

We’re dealing with real people’s lives, and any bias, no matter how small, can have far-reaching consequences.

Example 3: Facial Recognition and Law Enforcement

Facial recognition technology is another area where fairness is crucial. Several studies have shown that facial recognition systems often struggle to identify people with darker skin tones accurately.

“How is this possible?” I ask myself. With all our advancements, how can a system still make such glaring errors?

But then I realized—it all comes back to the data. If the AI was trained primarily on images of lighter-skinned individuals, it will be less accurate at identifying darker-skinned people. And if law enforcement agencies rely on these systems, that inaccuracy can lead to unjust outcomes, such as wrongful arrests or misidentification.

“Imagine being misidentified by an AI system just because it wasn’t trained properly,” I think.

The impact of such a failure is profound.

People’s lives can be turned upside down instantly, all because an algorithm wasn’t built with fairness in mind.

The Path Forward

So, how do we ensure fairness in AI?

It starts with the data. We need diverse and representative datasets to train these systems. But it also requires constant vigilance. Even with the best data, biases can creep in through the design or implementation of the AI system itself.

I often remind myself, “It’s not enough to trust that AI will ‘figure it out’ on its own. As developers and users, we have to be proactive in identifying and correcting biases.” It’s a responsibility that we must take seriously, especially as AI becomes more integrated into every aspect of our lives.

For me, fairness in AI is about ensuring that the technology we build serves everyone equally.

It’s about not allowing past biases to shape the future.

It’s about holding ourselves accountable to the highest ethical standards. Only then can we truly unlock AI’s potential in a way that benefits all of humanity.

What it Takes to Build a National AI Centre

This wasn’t just about building a center. It was about building Malaysia’s future.

It all started with a question. “Dr. Mazlan, do you think Malaysia needs a national AI center?”

At first, I paused. It was a question I had been grappling with for some time, but hearing it from others made me realize just how urgent the conversation had become. Artificial Intelligence (AI) isn’t just a buzzword anymore; it’s a transformative technology already reshaping industries worldwide. And if we don’t act now, we risk being left behind.

The first time I was asked this question, I remember sitting at a roundtable discussion with some of Malaysia’s top tech leaders. I could feel the weight of the moment. This wasn’t just an academic debate but a call to action.

“Yes,” I replied firmly. “We need a national AI center.”

But the follow-up questions came quickly. “What does it take to build such a center? How do we ensure its success? What infrastructure do we need? And what about talent? Can Malaysia really compete on a global stage?”

I found myself reflecting on my experience building Favoriot. There were striking similarities between the early challenges we faced with IoT and the new hurdles with AI. In both cases, it wasn’t just about the technology. It was about creating an ecosystem where innovation could thrive, talent could flourish, and industries could benefit.

Setting up a national AI center is the same. It’s about creating the right conditions for AI to impact meaningfully across sectors.

The Infrastructure Dilemma

Everyone seems to ask the first question: What infrastructure does a national AI center need?

It’s a fair question I’ve spent much time pondering. From my experience with Favoriot, I learned that infrastructure is the foundation upon which everything else is built. Without suitable systems, you’re doomed to fail before you even begin.

For AI, this means investing heavily in computational power. You can’t have AI without data, and you can’t process that data without high-performance computing. But it’s not just about raw computing power. We must consider the entire data pipeline — from storage and processing to analysis and action.

As I was explaining this to a colleague recently, I compared it to our early days at Favoriot. “Remember when we first started building our IoT platform?” I asked. “We underestimated how much data we’d need to handle, and we were constantly upgrading our servers. AI will be like that but on a much larger scale.”

We’ll need data centers that can scale to handle current demand and future growth. The cloud will be a critical part of this, as will edge computing, particularly for real-time applications. And then there’s the question of connectivity. Malaysia’s digital infrastructure is improving, but there’s still work to be done. We’ll need 5G to ensure the high-speed, low-latency networks that AI applications depend on.

I remember thinking about the logistics of all this. “Where do we even start?” I asked myself. “How do we ensure the infrastructure we build today isn’t obsolete tomorrow?”

It’s a daunting challenge but not an impossible one. With the right partnerships — local telcos and international tech companies — we can build the infrastructure an AI center needs to thrive.

Talent: The Heart of AI

As crucial as infrastructure is, it’s not the only thing that matters. The next big question is talent.

“Do we have enough AI talent in Malaysia?” someone asked me recently.

I paused. “Not yet,” I admitted. “But we can get there.”

Talent will be the most critical factor in determining whether or not a national AI centre succeeds. We need data scientists, machine learning engineers, AI researchers, and a host of other specialists who understand the nuances of AI.

I’ve seen this firsthand at Favoriot. Finding people who understood IoT early on was challenging, and AI is no different. We’re not just competing with local companies for this talent; we’re competing globally. Countries like the US, China, and South Korea are pouring resources into developing their AI talent pools.

But here’s where I’m optimistic. Malaysia has a young, tech-savvy population, and our universities are producing brilliant engineers and data scientists.

What we need is to create pathways for them to specialize in AI.

I remember discussing this with a professor recently. “We need to embed AI into the curriculum at all levels of education,” I said, “from secondary schools to universities. AI can’t be a niche subject — it must be a core part of our education system.”

But education alone isn’t enough.

We need to create opportunities for this talent to grow. That means internships, apprenticeships, and partnerships with the private sector. The National AI Center could act as a hub, connecting students and researchers with industry and giving them real-world problems to solve.

“Imagine a place,” I told a colleague, “where students, startups, and multinational companies are all working together, learning from each other, and pushing the boundaries of what AI can do. That’s what the national AI center could be.”

Collaboration: The Key to Success

This brings me to the next big question: how do we foster stakeholder collaboration?

This is where the real challenge lies. My experience at Favoriot taught me that collaboration isn’t always easy. There are so many different interests at play — government, industry, academia — and getting everyone on the same page can be challenging. But it’s essential.

Someone recently asked me, “Why do we need a national AI center? Why not let the private sector handle AI development?”

It’s a valid question and one that I’ve heard many times.

The answer lies in AI itself. AI isn’t just another technology; it’s a general-purpose technology that will impact every sector, from healthcare and education to finance and agriculture. No single entity can build an AI ecosystem independently; it requires collaboration.

The National AI Center would be a place where different stakeholders come together. The government could set policies and regulations that ensure AI is developed and used ethically. Universities could focus on research and training. Startups could experiment with new AI applications, and large corporations could scale those innovations.

“Think about it,” I told a friend recently. “If we can bring together the best minds from government, academia, and industry, we can create something truly special — a place where innovation happens at the intersection of different perspectives.”

The Benefits for Industry and Startups

One of the most exciting aspects of setting up a national AI center is the potential benefits for industry and startups.

When I first started Favoriot, I envisioned how IoT could transform industries in Malaysia. And while it took time, we now see that vision come to life. AI is poised to have a similar, if not more significant, impact.

The national AI center could provide a platform for established industries to experiment with new AI technologies without investing in expensive infrastructure. Imagine a manufacturing company collaborating with AI researchers to develop predictive maintenance algorithms or a healthcare provider working with data scientists to create personalized treatment plans using AI.

The possibilities are endless.

And for startups? The National AI Center could be a game-changer. Startups often have brilliant ideas but lack the resources to bring those ideas to life. The AI center could provide them with the computational power, data, and expertise they need to scale their innovations.

I’ve seen how difficult it can be for startups to break into industries that are traditionally slow to adopt new technologies. However, with the support of a national AI center, those barriers could be lowered. Startups could test their ideas, get feedback from industry leaders, and scale their solutions faster.

I remember talking to a startup founder recently who was working on an AI-powered solution for agriculture. “We have the technology,” he told me, “but we need access to data and the right partners to scale.”

That’s where the National AI Centre comes in. It would act as a bridge, connecting startups with the data, infrastructure, and partnerships they need to succeed.

A Vision for the Future

As I sit here, reflecting on these conversations, I can’t help but feel a sense of urgency. The world is moving quickly, and AI will be at the heart of that change. Malaysia has the potential to lead, but only if we act now.

“Can we do this?” I asked myself one evening as I sketched out ideas for the center. The answer is yes. However, it will require a concerted effort from government, industry, academia, and startups.

Setting up a national AI center is a bold vision, but it can transform Malaysia into a leader in AI innovation. With the proper infrastructure, talent, and collaborations, we can create an AI ecosystem that benefits everyone — industries, startups, and the nation.

When we look back in a few years, I believe we’ll see that this wasn’t just about building a center. It was about building Malaysia’s future.

Connected World Song

MUSIC BY AI

Song & Lyrics

Connected World Song — Dedicated to IoT Enthusiasts

I generated this beautiful song, “Connected World,” using an app called Suno AI. You give it a prompt describing the lyrics you want and the style of music you like.


[Verse] 
Wires crossing 
Signals talk 
In a world that never sleeps 
Tech is here 
Future walks

[Verse 2] 
Smart cities 
Bright and clear 
IoT AI blend 
Transformations 
Drawing near

[Chorus] 
Connected world 
Flying high 
Data streams that paint the sky 
In our hands 
Change is nigh 
Future dreams can never die

[Verse 3] 
Sensors pulse 
Code ignite 
Every moment is aligned 
Nodes that sparkle in the night

[Bridge] 
Artificial 
Brings to life 
Possibilities so wide 
Changing minds 
In every stride

[Chorus] 
Connected world 
Flying high 
Data streams that paint the sky 
In our hands 
Change is nigh 
Future dreams can never die


Listen to this song — Connected World

The ChatGPT Millionaire — Book Review

By Mazlan Abbas

Making Money with AI

The ChatGPT Millionaire


Having dabbled with ChatGPT before, I approached “The ChatGPT Millionaire” with a mix of skepticism and curiosity.

While I’ve had my share of exposure to AI and its utilities, this book promised to offer new insights and methods for monetizing this tool, particularly with GPT-4.

Here’s my take on whether it lives up to its promise.


Surprisingly Insightful!

From the start, the book emphasizes creating value with minimal effort — a proposition too tempting to ignore.

The idea of generating passive income by leveraging AI is not just enticing but revolutionary in today’s gig economy.

What struck me was the simplicity of the guide. It’s as if the author is holding your hand and walking you through each step with ease.

“Can I really create a sustainable income source this easily?” I wondered initially.

But as I delved deeper, the book’s pragmatic approach and clear, actionable steps dispelled my doubts.

It’s not just about making quick money; it’s about smart, efficient work capitalizing on the current market gap in AI utilization.

Practical and Actionable

One of the book’s strengths is its practicality.

The section on impressing clients by delivering high-quality work at lightning speed resonated with me.

As someone who values efficiency and quality, I believe the strategies outlined here are game-changers.

For instance, the guide on creating engaging content provides not just the ‘how-to’ but also the ‘why’ — a crucial aspect often overlooked in similar guides.

As I experimented with the “Act as” prompts provided, I was amazed at the versatility and adaptability of GPT-4.

The prompts are not just instructions but keys to unlocking the AI’s potential in various niches, from writing to coding to social media management.

Beyond Theoretical Knowledge

The book offers real-world applications and examples, making the leap from theory to practice seamless.

The narrative of becoming a “superhuman freelancer” isn’t far-fetched when you see the tangible examples and templates provided.

It’s empowering to envision oneself completing tasks with such efficiency and precision, thanks to AI.

Yet, it’s not all roses.

The author doesn’t shy away from discussing the limitations of ChatGPT, grounding the book in reality.

This transparency builds trust and sets realistic expectations—a crucial element in any guide that aspires to be actionable and reliable.


The Verdict — A Must-Read for Aspiring Entrepreneurs

In conclusion, “The ChatGPT Millionaire” is more than just a manual; it’s a blueprint for tapping into an emerging market.

Whether you’re a seasoned freelancer or a newbie looking for innovative income streams, this book has something for you.

Its blend of simplicity, depth, and practicality makes it a standout resource.

While I entered somewhat skeptical, I emerged enlightened and excited about the possibilities.

This isn’t just a book; it’s a great way to transform how we approach work and income generation in the AI age.

The added bonus materials only sweeten the deal, offering ongoing value and a toolkit for success.

So, would I recommend it? Absolutely.

Whether you’re looking to enhance your productivity, expand your business, or just curious about AI’s potential, “The ChatGPT Millionaire” is a treasure trove of insights and strategies that promise to inform and transform.


Get the book The ChatGPT Millionaire from Amazon.

(This article contains affiliate links; I earn a commission if you decide to purchase.)

Create Canva Posters vs Write with AI-Assistance — What’s the Difference?

The Creative Shift

Transforming My Approach to Content Creation

Why can’t I use AI to aid in my writing? Photo from Pexels

Two tools have dramatically reshaped the landscape for people like me — Canva and AI writing aids.

Not too long ago, Photoshop was the go-to for anyone wanting to design anything, but it required a steep learning curve and considerable time investment.

Fast forward to the present, and Canva has become the darling of content marketers, bloggers, educators, and virtually anyone looking to create eye-catching graphics without a degree in design.

On the writing front, AI tools have emerged as a potent ally for non-native English speakers, those less confident in their writing skills, or anyone pressed for time but in need of quality content.

Let me share my thoughts on how these two platforms have changed the game for me, offering empowerment and occasional ethical puzzles.

Canva: Design Democratized

Before Canva, the idea of crafting a professional-looking poster would send me into a mild panic.

My design skills were, let’s say, rudimentary at best.

Canva changed all that.

Intuitive and Inviting

Right off the bat, Canva felt welcoming.

Its drag-and-drop functionality, vast array of templates, and simple interface allowed me to create something visually appealing without prior design knowledge.

Suddenly, I was designing posters, social media graphics, and even branding materials with a newfound sense of confidence and creativity.

Creativity Unleashed

What’s remarkable about Canva is how it levels the playing field.

You don’t need to be an artist or a designer to create beautiful visuals.

The platform provides the tools and inspiration, allowing your ideas to take center stage.

It’s liberating, enabling me to communicate visually in ways I never thought possible.

AI Writing Tools: The Power of Language Unlocked

Moving on to AI writing aids, these tools have been nothing short of revolutionary for me, particularly as someone who doesn’t always easily find the right words.

Enhancing Expression

The AI doesn’t write for me; it writes with me.

I start with my thoughts, however chaotic, and the AI helps me refine them, suggesting better phrasing, correcting grammar, or even proposing new directions for my narrative.

It’s like having a collaborative partner who’s always ready to assist, guide, and improve.

Ethical Considerations

However, with great power comes great responsibility.

Using AI to enhance writing raises questions about authenticity and originality.

Where do my ideas end and the AI’s begin? I’ve come to view it as a tool, much like a thesaurus or an editor.

It enhances my voice without replacing it, ensuring my content remains genuinely mine.

Canva vs. AI: A Comparative Glance

While both Canva and AI writing tools have been game-changers, they serve different facets of content creation.

Here’s how they stack up against each other, in my experience.

User Empowerment

Canva empowers visually, breaking down complex design processes into simple, manageable steps.

AI writing tools empower linguistically, helping articulate thoughts more clearly and persuasively.

Both tools democratize their respective domains but do so in different realms — visual vs. verbal.

Creative Process

With Canva, I’m often inspired by the templates and design elements available, which spark ideas or take my projects in directions I hadn’t anticipated.

In contrast, AI writing feels more linear. It starts with my input and refines it toward a polished outcome.

Canva engages my visual creativity, while AI challenges and enhances my verbal expression.

Learning Curve

Canva’s learning curve is relatively flat, especially for basic tasks.

It’s about selecting, dragging, dropping, and tweaking.

On the other hand, AI writing tools require a deeper understanding of how to interact effectively with the technology, crafting prompts that elicit the best possible output.

Outcome Satisfaction

With Canva, I see the results instantly.

I know when a design clicks because it resonates visually.

With AI writing, the satisfaction is more nuanced, often requiring several iterations to feel “just right.”

It’s a more internal, reflective process, where I assess whether the words truly capture what I’m trying to convey.

Embracing the Tools of Today for Tomorrow’s Creations

Ultimately, both Canva and AI writing tools have transformed how I create content.

Canva has given me a visual voice, enabling me to design with confidence and flair.

AI writing aids have sharpened my words, ensuring my message is conveyed clearly and effectively.

While these tools provide incredible support, they also underscore the importance of maintaining a personal touch and authenticity in our creations.

Whether it’s a poster or a piece of prose, the essence of our individuality should always shine through.

Adapting to new technologies can be daunting, but my journey with Canva and AI writing tools has taught me the value of embracing change.

In a world where digital tools are increasingly integral to our creative expressions, being open to learning and adapting is key.

These technologies have the power to transform ordinary ideas into extraordinary creations, and I’m excited to be part of this transformative journey.


You can read the book “Canva: Professional Tips and Tricks When You Design with Canva (Step by Step Canva Guide for Work or Business with Pictures)” to improve your designs.


AI in Writing – Cheating or Just Another Tool in the Toolbox?

Navigating the Nuances

AI’s Role in Polishing My Prose

Photo by Anita Austvika on Unsplash

AI and the Art of Writing: A Non-Native Speaker’s Perspective.

The integration of artificial intelligence (AI) in our daily lives is a debate that has only grown more intense with time.

From driving cars to predicting weather, AI’s reach seems limitless.

But when it comes to the sacred act of writing, should we lean on AI for assistance, or is that crossing a sacred line of human creativity?

As someone who’s grappling with these questions personally, I’ve got some thoughts to share.

First off, let me be clear — I align with many writers who believe that using AI to write an entire article with just a prompt isn’t the way to go.

The essence of writing, for me, involves a deep personal engagement with the topic, a process that shapes thoughts into words in a way that’s inherently human.

To simply feed a prompt to an AI and let it churn out content feels like a shortcut that bypasses the soul of writing.

But here’s where my perspective might diverge from some purists — I use AI to assist in my writing.

Now, before you raise your eyebrows, hear me out.

English isn’t my first language, and I’m learning to master it.

This journey is filled with nuances, idioms, and syntactical structures that are sometimes baffling.

In my quest to express myself clearly and fluently, I’ve found AI to be a valuable ally.

When I write, the initial draft brims with my thoughts, raw and unfiltered.

It’s me at my most genuine, but it’s also me at my most vulnerable, grappling with a language that isn’t my mother tongue.

After pouring my thoughts onto paper, I turn to AI, not to rewrite my content but to refine it.

The essence, the core ideas, and the originality – all remain untouched, sourced from the wellspring of my creativity.

Now, you might wonder, is that wrong?

Is it cheating, or is it simply using available tools to bridge a linguistic gap?

To me, writing is an expression of the self, an art form that is as personal as it is universal.

When I use AI, it’s a bit like a non-native painter using a more advanced brush to bring their visions to life.

The painting’s concept, its emotion, and its message remain the artist’s own.

The brush is merely a tool to help realize that vision more vividly.

However, I recognize the other side of the argument.

Writing should maintain a human touch, they say.

It should reflect the imperfections, the quirks, and the unique style of its creator.

By involving AI, do we risk sanitizing writing to the point where it loses its individuality, its human essence?

I ponder this, especially when I see AI-written pieces that lack warmth or personal insight.

Yet, I believe the answer isn’t a blanket rejection of AI assistance but a balanced approach.

AI can help polish grammar, enhance clarity, and even suggest ways to make the prose more engaging.

But it should not – and cannot – replace the human experience, the personal stories, and the authentic voice that make writing resonate.

Consider this: every tool, from the quill pen to the word processor, has influenced how we write.

Yet, at their core, these are just tools – they don’t create art on their own.

Similarly, AI is a tool, albeit a sophisticated one.

It can assist, but the spark of creation? That remains distinctly human.

So, to those who argue that AI has no place in writing, I offer this perspective: AI is not a replacement but an enhancement, a means to bridge gaps and elevate expression.

For non-native speakers like me, it’s a way to communicate more effectively, ensuring our voices are heard and our messages understood.

In the end, isn’t the goal of writing to connect, to convey, to communicate?

If AI can help us do that more clearly, more powerfully, then perhaps it’s not the enemy but an ally.

After all, the essence of writing – the ideas, the passion, the message – will always come from the human heart and mind.

Let’s not shun AI in writing outright.

Instead, let’s find ways to use it responsibly, ensuring that it enhances rather than eclipses our human touch.

By doing so, we honor both the tradition of writing and the potential of technology, weaving them together in a way that enriches our expression and our understanding of each other.

Question to ponder — Is using AI … cheating or just another tool in the toolbox?