The Crucial Role of Inclusiveness in AI

AI ETHICS

Principle of AI — Inclusiveness

Image created using ChatGPT

I was fascinated by AI’s potential. It seemed like the future, with endless possibilities to revolutionize healthcare, education, and legal systems.

But one thought kept nagging at me: Who benefits from this technology? It dawned on me that if AI only serves a select group, it could widen existing social inequalities. If AI is only built for those with the most access, are we moving forward?

My work with the Internet of Things (IoT) and smart cities has already shown me how technology, while promising to enhance urban living, often caters to those with the resources to use it.

That same realization hit me with AI: AI must be inclusive.

It has to serve everyone, especially the vulnerable, or we risk creating deeper societal divisions. This is why AI must align with the principles of our Federal Constitution, which emphasizes equality, justice, and fairness for all.

Building Inclusiveness into AI Development

The first step to creating inclusive AI is ensuring the systems are designed for everyone, not just the privileged few.

I remember discussing this with a colleague. I asked, “What happens when AI systems in healthcare only use data from urban hospitals that serve wealthier patients?”

We both knew the answer.

Those systems wouldn’t be effective in rural areas, where diseases manifest differently and healthcare resources are more limited.

This example stayed with me. Imagine an AI designed to detect skin cancer, I thought. If it’s only trained on images of light-skinned individuals, what happens when it’s used on darker-skinned patients?

The answer is obvious: it could misdiagnose or fail to identify the condition entirely. Such bias has serious consequences—it could lead to poorer healthcare outcomes for large sections of the population.

That’s why AI systems need diverse data. We can ensure that AI serves everyone equally by training models on datasets that include various skin tones, environments, and lifestyles.

I remember thinking, This is more than just good design — it’s about justice. AI has to reflect the diversity of the people it’s meant to serve, or we’re not living up to our national values of fairness and equality.
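To make this concrete, here is a minimal sketch of what such a dataset audit could look like in Python. Everything in it is hypothetical — the metadata file, the column name, and the skin-tone categories — but it shows the basic idea: measure how well each group is represented before training begins.

```python
from collections import Counter
import csv

def audit_skin_tone_balance(metadata_path: str, min_share: float = 0.10) -> None:
    """Print how many training images fall into each skin-tone group
    and flag groups that fall below a chosen minimum share."""
    with open(metadata_path, newline="") as f:
        # Hypothetical metadata: one row per image, with a "skin_tone" column.
        rows = list(csv.DictReader(f))

    counts = Counter(row["skin_tone"] for row in rows)
    total = sum(counts.values())

    for tone, n in sorted(counts.items()):
        share = n / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{tone}: {n} images ({share:.1%}){flag}")

# Hypothetical usage:
# audit_skin_tone_balance("skin_lesion_metadata.csv")
```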

Addressing the Needs of Vulnerable Groups

Then, there’s the issue of how AI tools can meet the specific needs of vulnerable populations.

AI should not be reserved for those who live in developed, well-connected areas or who can afford the latest technology. It must serve everyone, especially those most in need.

One day, I was thinking about the legal system and how difficult it is for many people to get proper legal representation.

I thought, “What if an AI could provide essential legal advice to those who can’t afford a lawyer?” This idea felt like a breakthrough. AI could help people understand their legal rights, assist in drafting contracts, or even generate legal documents.

But then another thought came to mind: What about people who struggle with reading? Or those without reliable internet access?

For AI to be inclusive, it must account for these users.

I imagined an AI legal assistant offering voice guidance for people with lower literacy levels or an AI working offline to reach remote areas. It became clear to me that AI could be the key to equal access to justice—but only if it’s designed to include everyone.

This aligns perfectly with our national principles of fairness and equality.

Ensuring Diversity Among AI Developers

As much as inclusiveness is about the technology itself, it’s also about who is building it. A diverse team of developers brings different perspectives, helping identify and address biases early on.

Are the people building this AI as diverse as those it serves?

Education is a perfect example of how a lack of diversity in AI development can lead to unintended consequences.

I once discussed AI-powered systems for grading student essays. I wondered, “What if the AI is biased towards a specific cultural or linguistic group?”

Imagine a system that unintentionally favors students from urban areas who are more familiar with specific cultural references. Students from rural or minority backgrounds could be unfairly marked down simply because the AI doesn’t understand their context.

That’s where a diverse team of developers comes in.

They would bring a broader range of experiences and insights, helping to design AI systems that are fairer and more inclusive.

I pictured a scenario where developers from various backgrounds are involved in creating an AI-powered educational tool. A diverse team would recognize that not all students have the same internet access, so they would design the system to work offline or in low-bandwidth environments.

That’s how AI can truly level the playing field for students, I thought. It’s about giving every student an equal chance, no matter where they come from.

Moving Forward with Inclusive AI

As I reflect on the future of AI, one thing becomes clear to me: Inclusiveness is not a choice; it’s a necessity.

If we’re not careful, AI could widen the gaps we want it to close.

That’s why we need to ensure that AI development practices are inclusive, that tools are designed to meet the needs of vulnerable groups, and that the teams behind these systems are as diverse as the society they serve.

In my work with IoT and smart cities, I’ve always aimed to make technology accessible to as many people as possible.

The same approach must be taken with AI.

By focusing on inclusiveness, we can ensure AI systems benefit everyone, which aligns with our Federal Constitution and National Principles. This isn’t just about technology; it’s about creating a fairer, more just world.

In the end, I realized that inclusiveness in AI isn’t a luxury—it’s essential.

If we don’t take inclusiveness seriously, we risk creating technology that serves only the privileged and leaves the rest behind.

And that’s not the future I want to build.

The Importance of Transparency in AI

AI ETHICS

Building Trust in AI Software

Image created using ChatGPT

I was fascinated by AI’s power to automate complex tasks, solve problems, and even make decisions that typically require human judgment.

But as I dug deeper into the mechanics of AI, one question kept coming to my mind: How do we ensure that AI is doing what it’s supposed to do?

More importantly, how do we ensure everyone affected by AI decisions understands what’s happening behind the scenes? That’s where the principle of transparency comes into play.

Transparency in AI isn’t just about ensuring the technical aspects are visible to a select group of developers or engineers. It’s about ensuring that the processes and decisions made by AI systems can be explained to all stakeholders — whether they’re technical experts, end users, or decision-makers.

AI must not be a “black box” where decisions are made, but no one understands how or why.

This idea of transparency is essential when AI makes decisions that impact people’s lives. Whether deciding who gets a loan, determining the outcome of a legal case, or even influencing hiring decisions, transparency allows stakeholders to evaluate the risks and address any issues.

Full Disclosure: When AI is Making Decisions

One key aspect of transparency is being upfront when AI is involved in decision-making.

Let’s consider a scenario in the hiring process.

Imagine applying for a job, going through an interview, and later finding out that the final decision on whether you were hired was made by an AI system instead of a human.

I often think about this: Wouldn’t it be frustrating if you didn’t know an AI was involved? That’s why it’s so crucial for companies and organizations to disclose when AI systems are being used in decision-making processes.

People have a right to know if an algorithm is influencing the decisions that affect their lives.

It’s not just a matter of ethics — it’s about trust.

Let’s say a company uses an AI system to screen job applicants. Full disclosure would mean informing applicants upfront that an AI tool is part of the selection process, explaining how it works, and outlining what data it considers.

Without this transparency, candidates are left doubting the outcome, especially if they are rejected with no explanation.

Transparency gives people the opportunity to understand and even challenge decisions if needed.

The Purpose Behind the AI System

Another critical element of transparency is ensuring the AI system’s purpose is clear.

Take, for example, a facial recognition system used in security.

How many people understand the full extent of facial recognition’s purpose? Is it merely for security, or is it also used to track individuals for marketing purposes?

Stakeholders should always be aware of the purpose of the AI systems they interact with. For example, suppose a facial recognition system is used at an airport for security purposes. In that case, passengers must know precisely what the system is doing, what kind of data is being collected, and how it’s being used.

Without this clarity, there’s a risk of misuse or mistrust.

One real-world example is when social media platforms use AI to filter content.

If users are unaware that AI systems are screening and categorizing their posts, they may not understand why specific posts are taken down or flagged. This lack of transparency can create confusion, making people feel their rights are being violated.

Understanding the Data: Bias and Quality Control

Whenever I think about AI transparency, the issue of training data comes to mind.

AI systems are only as good as the data they’re trained on, but often, the data contains biases that reflect historical or social inequalities. The data used to train AI must be disclosed and scrutinized to ensure fairness.

Take the example of AI systems used in the legal system.

Imagine an AI tool designed to predict the likelihood of someone reoffending after being released from prison. If the data used to train the AI is biased — perhaps it overrepresents specific communities — it could lead to unfair outcomes.

What if the AI system was unknowingly biased against a specific demographic? These biases could go unchecked without transparency about the training data, perpetuating discrimination.

In my view, transparency in AI isn’t just about disclosing that AI is being used — it’s also about being open about the data and processes behind it. Stakeholders need to know what historical and social biases might exist in the data, what procedures were used to ensure data quality, and how the AI system is maintained and assessed.
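One concrete way to practise this kind of openness is to publish simple disparity figures alongside the model. The sketch below is purely illustrative — the records and group labels are invented — but it shows how the share of “high risk” labels per demographic group could be reported so stakeholders can see gaps for themselves.

```python
from collections import defaultdict

# Hypothetical model outputs: each person's demographic group and
# the risk label the tool assigned to them.
predictions = [
    {"group": "A", "risk": "high"},
    {"group": "A", "risk": "low"},
    {"group": "B", "risk": "high"},
    {"group": "B", "risk": "high"},
]

def high_risk_rate_by_group(records):
    """Return the fraction of 'high' risk labels within each group."""
    totals, highs = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["risk"] == "high":
            highs[record["group"]] += 1
    return {group: highs[group] / totals[group] for group in totals}

print(high_risk_rate_by_group(predictions))
# {'A': 0.5, 'B': 1.0} -> a gap this large is worth disclosing and investigating
```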

Maintaining and Assessing AI Systems

An often overlooked but equally important aspect of AI transparency is how these systems are maintained over time.

Just because an AI model works well today doesn’t mean it will work as expected tomorrow. What if the data changes or the system starts to degrade over time?

I always think of this in the context of healthcare. Imagine an AI system used to assist doctors in diagnosing patients. The system was trained on medical data several years ago, but medical knowledge and treatments have evolved rapidly. Without regular updates and assessments, the AI could become outdated, leading to inaccurate diagnoses.

Transparency means informing users about how the AI system works now and keeping them updated on how it’s maintained and monitored over time. This ensures that AI systems remain effective and fair.

The Right to Challenge AI Decisions

Finally, AI transparency must include people’s ability to challenge decisions made by AI systems.

This is crucial for building trust.

If someone feels an AI system has treated them unfairly—say, by denying them a loan or incorrectly flagging them in a security system—they should have the right to question and appeal the decision.

I often ask myself, How would I feel if an AI made a decision about me and I had no way to contest it? This is where transparency plays a pivotal role.

It’s not enough for people to know an AI system made a decision — they also need to know how to challenge that decision.

Transparency ensures that AI systems are accountable through a human review process or by providing clear channels for appeals.

Moving Forward with Transparent AI

It’s clear that transparency is not a luxury—it’s a necessity.

Without it, AI systems risk becoming tools people don’t trust or understand. To succeed, AI must be transparent in its processes, decisions, and data usage.

Transparency principles — whether they involve disclosing AI’s role in decision-making, clarifying its intended purpose, or allowing for challenges — are essential to building trust in AI systems.

This is the only way to ensure AI systems benefit everyone fairly and responsibly.

Protecting Your Future: Why AI Security and Privacy Matter

ABOUT ARTIFICIAL INTELLIGENCE (AI)

Security and Privacy — Principle of AI

When we talk about artificial intelligence (AI), one of the most important things to remember is that AI must be secure and must protect our privacy. It’s like driving a car.

You want the car to function properly, keep you safe, and always be in your control.

AI is no different.

These systems must perform as intended and resist tampering, especially by unauthorized parties.

In my experience working with IoT and smart cities, I have seen the risks and benefits of AI, and developers need to ensure that safety and security are built into every system from the beginning.

Let me explain with some simple examples.

Example 1: Self-driving Cars

One of the most exciting advancements in AI is the development of self-driving cars. Imagine a vehicle designed to drive itself from point A to point B.

The promise of these cars is enticing: fewer accidents, no need for human intervention, and efficient traffic management.

But what happens if the AI controlling the car is hacked? What if an unauthorized party can take control and steer the vehicle into danger?

This is where safety and reliability come into play.

The AI system must be designed to resist such interference. Developers must ensure that only authorized individuals can interact with the AI’s decision-making process.

If someone tries to hack into the system, the AI must be able to detect and prevent the intrusion. Without this security, the risk of accidents increases dramatically, and people may lose trust in AI technology.

In my experience with IoT and smart city solutions, we must design systems with these safeguards from the ground up.

AI systems should be tested rigorously under various scenarios to ensure they perform as intended, even in unexpected conditions.

For instance, just as we ensure an IoT device in a smart city responds safely during a power outage, a self-driving car should still behave responsibly if something goes wrong.

Example 2: AI-powered Healthcare Diagnostics

Another powerful application of AI is in healthcare.

AI systems are now being used to assist doctors in diagnosing diseases based on medical images or patient data. Consider how an AI system can analyze thousands of medical scans in seconds to identify potential problems like tumors or heart conditions.

But what if the AI system gives a wrong diagnosis? Or what if someone manipulates the data to favor certain patients while discriminating against others?

Here’s where privacy and data protection become crucial.

Developers must obtain consent before using someone’s personal health data to develop or run an AI system. Patients must know how their data is being used and should have the right to control it.

Data collected for these purposes should never be used to discriminate against patients based on race, gender, or other factors.

Incorporating security-by-design and privacy-by-design principles ensures that data is protected from misuse throughout the AI system’s entire lifecycle.

Developers should also adhere to international data protection standards so patients can trust that their health data is safe and won’t be used unlawfully. As someone who has worked with data from IoT systems, I know how easily personal data can be misused if not handled carefully.

Example 3: AI in Smart Home Devices

Now, let’s look at something more straightforward: smart home devices. Many people use AI-powered gadgets in their homes, like smart thermostats, voice-activated assistants, or security cameras.

These devices collect a lot of personal data.

Imagine if someone could access your security camera without your permission, or if your voice assistant recorded your conversations and shared them with companies you don’t know about.

Developers of these AI systems must obtain user consent before collecting and using this data. And once the data is collected, it must be protected.

The system should guarantee privacy, meaning the information stays confidential and cannot be accessed by unauthorized parties.

Moreover, the system must be transparent about how the data is used so that users can make informed decisions.

I often tell people that IoT and AI systems are like locks on a door. You wouldn’t leave your front door unlocked for anyone to walk in, right? In the same way, AI systems must lock down data and make sure only the right people have access.

A secure and privacy-conscious design helps build trust with users, which is essential for the widespread adoption of AI technologies.

Final Thoughts

For AI to truly succeed and be embraced by the masses, it must be trustworthy.

We can’t ignore the risks associated with it, but we can mitigate those risks by focusing on safety, security, and privacy. AI systems need to be reliable, and developers should always aim to meet the highest standards in protecting users’ data.

When AI is safe, secure, and controllable, we all stand to benefit from its incredible potential.

In every project I’ve been involved in, from IoT solutions to smart cities, this principle has been at the forefront: build systems that people can trust.

Only then can we realize AI’s full potential in transforming industries, healthcare, and our daily lives.

Understanding Fairness in AI

UNDERSTANDING ARTIFICIAL INTELLIGENCE

How to build trust in AI machines and software.

Image created using ChatGPT

As I explore Artificial Intelligence (AI), one principle resurfaces in almost every conversation: fairness. But what does fairness mean when we talk about AI?

AI systems should be designed and implemented to avoid bias and discrimination. It sounds simple, but the more I think about it, the more complex it becomes. How can we ensure that a machine, learning from data that may contain past biases, remains fair to everyone?

I’ve spent years working in technology, from telecommunications to IoT, and I’ve seen firsthand how tech can change lives.

But what happens when this powerful technology, which is supposed to serve everyone, starts favoring particular groups? That’s the real issue with biased AI. Unfortunately, it’s not just a hypothetical concern—it’s happening all around us.

“Is AI fair?” I often ask myself. And the answer, unfortunately, isn’t always “yes.”

Example 1: The Recruitment Algorithm

Let me start with an easy-to-grasp scenario. Imagine a company using AI to screen job applicants.

The goal is simple: the AI looks at resumes and selects the best candidates for the job.

It sounds efficient.

But what if the historical data fed into the system reflects past biases? What if, historically, the company has hired more men than women for tech roles?

The AI would begin to learn from this data, thinking that men are more likely to succeed in these roles. The result? The AI starts favoring male candidates, even if female candidates are equally or more qualified.

As I think about this, I realize the real danger isn’t just the immediate bias — it’s the fact that it can perpetuate and amplify over time.

“What if this AI system continues being used for years?” I ponder. “How many qualified candidates will be unfairly rejected just because the AI absorbed a biased pattern from the past?”

This is why fairness is critical in AI systems.

We need to ensure that the algorithms don’t just mimic the past but actively help us create a more equitable future.
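One simple check that gets at this is comparing selection rates across groups — sometimes summarised by the “four-fifths” rule of thumb, which flags cases where one group’s rate falls below 80% of another’s. The sketch below is only an illustration, and the screening numbers are invented purely to show the calculation:

```python
# Hypothetical screening outcomes: 1 = shortlisted by the AI, 0 = rejected.
shortlisted = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 1],
    "women": [1, 0, 0, 1, 0, 0, 1, 0],
}

# Selection rate per group.
rates = {group: sum(v) / len(v) for group, v in shortlisted.items()}
print(rates)  # {'men': 0.75, 'women': 0.375}

# Disparate-impact ratio: lowest selection rate divided by highest.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.50 -> well below the 0.8 rule of thumb
```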

Example 2: AI in Healthcare

Another troubling example is in healthcare.

Imagine an AI system that helps doctors decide who should receive life-saving treatment first. Ideally, it should be a neutral tool that analyzes medical data to determine who is in the most critical condition.

But what if the AI has been trained on data favoring one demographic over another, such as wealthier patients who typically have better access to healthcare?

The AI might then start recommending treatments to wealthier individuals while overlooking those from underprivileged backgrounds who may have just as critical a need.

“How can we let this happen in healthcare?” I ask myself. The stakes are too high. It’s a matter of life and death, and if we can’t ensure fairness in these systems, we are failing those who need help the most.

This is why AI fairness isn’t just a technical issue — it’s a moral one.

We’re dealing with real people’s lives, and any bias, no matter how small, can have far-reaching consequences.

Example 3: Facial Recognition and Law Enforcement

Facial recognition technology is another area where fairness is crucial. Several studies have shown that facial recognition systems often struggle to identify people with darker skin tones accurately.

“How is this possible?” I ask myself. With all our advancements, how can a system still make such glaring errors?

But then I realized—it all comes back to the data. If the AI is trained primarily on images of lighter-skinned individuals, it will be less accurate at identifying darker-skinned people. When law enforcement agencies rely on these systems, that can lead to unjust outcomes, such as wrongful arrests or misidentification.

“Imagine being misidentified by an AI system just because it wasn’t trained properly,” I think.

The impact of such a failure is profound.

People’s lives can be turned upside down instantly, all because an algorithm wasn’t built with fairness in mind.
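A first step towards catching this kind of failure is simply measuring accuracy separately for each group rather than as a single overall number. The evaluation records below are invented, but the per-group breakdown is the point:

```python
# Hypothetical evaluation results: True = the system matched the face correctly.
results = {
    "lighter-skinned": [True, True, True, True, False],
    "darker-skinned":  [True, False, False, True, False],
}

for group, outcomes in results.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: {accuracy:.0%} correct")
# lighter-skinned: 80% correct
# darker-skinned: 40% correct -> the kind of gap those studies describe
```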

The Path Forward

So, how do we ensure fairness in AI?

It starts with the data. We need diverse and representative datasets to train these systems. But it also requires constant vigilance. Even with the best data, biases can creep in through the design or implementation of the AI system itself.

I often remind myself, “It’s not enough to trust that AI will ‘figure it out’ on its own. As developers and users, we have to be proactive in identifying and correcting biases.” It’s a responsibility that we must take seriously, especially as AI becomes more integrated into every aspect of our lives.

For me, fairness in AI is about ensuring that the technology we build serves everyone equally.

It’s about not allowing past biases to shape the future.

It’s about holding ourselves accountable to the highest ethical standards. Only then can we truly unlock AI’s potential in a way that benefits all of humanity.

What it Takes to Build a National AI Centre

This wasn’t just about building a center. It was about building Malaysia’s future.

It all started with a question. “Dr. Mazlan, do you think Malaysia needs a national AI center?”

At first, I paused. It was a question I had been grappling with for some time, but hearing it from others made me realize just how urgent the conversation had become. Artificial Intelligence (AI) isn’t just a buzzword anymore; it’s a transformative technology already reshaping industries worldwide. And if we don’t act now, we risk being left behind.

The first time I was asked this question, I remember sitting at a roundtable discussion with some of Malaysia’s top tech leaders. I could feel the weight of the moment. This wasn’t just an academic debate but a call to action.

“Yes,” I replied firmly. “We need a national AI center.”

But the follow-up questions came quickly. “What does it take to build such a center? How do we ensure its success? What infrastructure do we need? And what about talent? Can Malaysia really compete on a global stage?”

I found myself reflecting on my experience building Favoriot. There were striking similarities between the early challenges we faced with IoT and the new hurdles with AI. In both cases, it wasn’t just about the technology. It was about creating an ecosystem where innovation could thrive, talent could flourish, and industries could benefit.

Setting up a national AI center is the same. It’s about creating the right conditions for AI to impact meaningfully across sectors.

The Infrastructure Dilemma

The first question everyone seems to ask is: what infrastructure does a national AI center need?

It’s a fair question, and one I’ve spent much time pondering. From my experience with Favoriot, I learned that infrastructure is the foundation upon which everything else is built. Without suitable systems, you’re doomed to fail before you even begin.

For AI, this means investing heavily in computational power. You can’t have AI without data, and you can’t process that data without high-performance computing. But it’s not just about raw computing power. We must consider the entire data pipeline — from storage and processing to analysis and action.

As I was explaining this to a colleague recently, I compared it to our early days at Favoriot. “Remember when we first started building our IoT platform?” I asked. “We underestimated how much data we’d need to handle, and we were constantly upgrading our servers. AI will be like that, but on a much larger scale.”

We’ll need data centers that can scale to handle current demand and future growth. The cloud will be a critical part of this, as will edge computing, particularly for real-time applications. And then there’s the question of connectivity. Malaysia’s digital infrastructure is improving, but there’s still work to be done. We’ll need 5G to ensure the high-speed, low-latency networks that AI applications depend on.

I remember thinking about the logistics of all this. “Where do we even start?” I asked myself. “How do we ensure the infrastructure we build today isn’t obsolete tomorrow?”

It’s a daunting challenge but not an impossible one. With the right partnerships — local telcos and international tech companies — we can build the infrastructure an AI center needs to thrive.

Talent: The Heart of AI

As crucial as infrastructure is, it’s not the only thing that matters. The next big question is talent.

“Do we have enough AI talent in Malaysia?” someone asked me recently.

I paused. “Not yet,” I admitted. “But we can get there.”

Talent will be the most critical factor in determining whether or not a national AI centre succeeds. We need data scientists, machine learning engineers, AI researchers, and a host of other specialists who understand the nuances of AI.

I’ve seen this firsthand at Favoriot. Finding people who understood IoT early on was challenging, and AI is no different. We’re not just competing with local companies for this talent; we’re competing globally. Countries like the US, China, and South Korea are pouring resources into developing their AI talent pools.

But here’s where I’m optimistic. Malaysia has a young, tech-savvy population, and our universities are producing brilliant engineers and data scientists.

What we need is to create pathways for them to specialize in AI.

I remember discussing this with a professor recently. “We need to embed AI into the curriculum at all levels of education,” I said, “from secondary schools to universities. AI can’t be a niche subject — it must be a core part of our education system.”

But education alone isn’t enough.

We need to create opportunities for this talent to grow. That means internships, apprenticeships, and partnerships with the private sector. The National AI Center could act as a hub, connecting students and researchers with industry and giving them real-world problems to solve.

“Imagine a place,” I told a colleague, “where students, startups, and multinational companies are all working together, learning from each other, and pushing the boundaries of what AI can do. That’s what the national AI center could be.”

Collaboration: The Key to Success

This brings me to the next big question: how do we foster stakeholder collaboration?

This is where the real challenge lies. My experience at Favoriot taught me that collaboration isn’t always easy. There are so many different interests at play — government, industry, academia — and getting everyone on the same page can be challenging. But it’s essential.

Someone recently asked me, “Why do we need a national AI center? Why not let the private sector handle AI development?”

It’s a valid question and one that I’ve heard many times.

The answer lies in AI itself. AI isn’t just another technology; it’s a general-purpose technology that will impact every sector, from healthcare and education to finance and agriculture. No single entity can build an AI ecosystem independently; it requires collaboration.

The National AI Center would be a place where different stakeholders come together. The government could set policies and regulations that ensure AI is developed and used ethically. Universities could focus on research and training. Startups could experiment with new AI applications, and large corporations could scale those innovations.

“Think about it,” I told a friend recently. “If we can bring together the best minds from government, academia, and industry, we can create something truly special — a place where innovation happens at the intersection of different perspectives.”

The Benefits for Industry and Startups

One of the most exciting aspects of setting up a national AI center is the potential benefits for industry and startups.

When I first started Favoriot, I envisioned how IoT could transform industries in Malaysia. And while it took time, we now see that vision come to life. AI is poised to have a similar, if not more significant, impact.

The national AI center could provide a platform for established industries to experiment with new AI technologies without investing in expensive infrastructure. Imagine a manufacturing company collaborating with AI researchers to develop predictive maintenance algorithms or a healthcare provider working with data scientists to create personalized treatment plans using AI.

The possibilities are endless.

And for startups? The National AI Center could be a game-changer. Startups often have brilliant ideas but lack the resources to bring those ideas to life. The AI center could provide them with the computational power, data, and expertise they need to scale their innovations.

I’ve seen how difficult it can be for startups to break into industries that are traditionally slow to adopt new technologies. However, with the support of a national AI center, those barriers could be lowered. Startups could test their ideas, get feedback from industry leaders, and scale their solutions faster.

I remember talking to a startup founder recently who was working on an AI-powered solution for agriculture. “We have the technology,” he told me, “but we need access to data and the right partners to scale.”

That’s where the National AI Centre comes in. It would act as a bridge, connecting startups with the data, infrastructure, and partnerships they need to succeed.

A Vision for the Future

As I sit here, reflecting on these conversations, I can’t help but feel a sense of urgency. The world is moving quickly, and AI will be at the heart of that change. Malaysia has the potential to lead, but only if we act now.

“Can we do this?” I asked myself one evening as I sketched out ideas for the center. The answer is yes. However, it will require a concerted effort from government, industry, academia, and startups.

Setting up a national AI center is a bold vision, but it can transform Malaysia into a leader in AI innovation. With the proper infrastructure, talent, and collaborations, we can create an AI ecosystem that benefits everyone — industries, startups, and the nation.

When we look back in a few years, I believe we’ll see that this wasn’t just about building a center. It was about building Malaysia’s future.

The Sins of Writing Using AI (ChatGPT) — Can You Forgive Me?

There are times I get the help of AI. Is that a big problem?

Image created using Canva by Author

I was sitting in front of my computer screen, scrolling through yet another post hammering home the point that using AI or ChatGPT for our writing is a big no-no.

The frustration was real.

My mind was swirling with thoughts, “Really? Not even a little help?”

It felt like being handed a paintbrush but told not to use any colors.

Here’s the deal — I get it, okay?

The idea of just tossing a simple prompt into the AI abyss and letting it spit out a full-blown article or book without a second thought doesn’t sit right with me either.

That’s not creativity; that’s just being lazy.

But here’s where my frustration bubbles up like a volcano on the verge of eruption: English isn’t my first language. Getting those perfect, grammatically flawless sentences down?

It’s like trying to navigate a maze blindfolded.

Hours, I tell you, hours spent wrestling with a 500 or 1000-word article, trying to make it sound just right.

And it’s not just about grammar.

Conveying what I want to say, filling my articles with the right emotions, the nuances, the color — it’s daunting.

So, here’s the forbidden thought: what if I draft my thoughts, outline my content, and then, just then, let AI or ChatGPT fine-tune it?

Is that really so wrong? A sin, even?

But what if the article turns out better? What if it actually gets read, appreciated, or — dare I say — earns something on Medium? Wouldn’t that be a good thing?

The thing is, some of my articles that had a touch of AI assistance received good responses and even earned a bit. Meanwhile, the ones I slaved over, pouring my heart and soul into every word without any AI help, barely made a ripple.

It’s enough to make you wonder, isn’t it?

“Why is it that the ones I work hardest on, the ones I pour my everything into, just… flop?” I’d ask myself, staring at the screen, waiting for an answer that never comes.

And yet, there’s this lingering fear — what if a Medium publication outright rejects my work because it got a helping hand from AI?

Even if I’m the one steering the ship, drafting the content, and ensuring it reflects my voice, my ideas?

So here’s the confession, the moment of truth at the end of this tale of frustration and inner turmoil: this article, the very one you’re reading now, had a bit of help from ChatGPT.

Yeah, I said it.

With its assistance, I managed to shape my thoughts, polish my words, and hopefully, make this piece engaging and coherent.

Can you forgive me?

In a world where the line between human and machine creativity is becoming ever so blurred, maybe it’s not about who writes the article but the story it tells, the emotions it evokes, and the impact it has.

If using AI means breaking down language barriers and bringing our stories to life in ways we never could on our own, perhaps it’s not such a sin after all.


I Asked ChatGPT – What Were the Reasons Behind Sam Altman’s Dismissal from OpenAI?

Photo by Andrew Neel on Unsplash

It started with a simple post by Sam Altman on X.

The world was in shock.

The news came suddenly, without any warning signs.

What really happened?

I thought the best way to answer was to ask ChatGPT itself.

Here’s ChatGPT’s answer.

In a surprising turn of events, Sam Altman was dismissed today from his role as CEO of OpenAI.

The board’s decision was driven by what they perceived as inconsistencies in Altman’s communications, impacting their ability to manage effectively.

Board’s Decision and Altman’s Departure:

The board of OpenAI concluded that Altman lacked consistent transparency in his dealings with them.

This led to a loss of confidence in his leadership, prompting a thorough review and his subsequent firing.

Altman, reflecting on his departure, expressed affection for his tenure at OpenAI and hinted at future endeavors.

Impact and Reaction:

Altman’s dismissal came unexpectedly, considering his active involvement in OpenAI’s strategic direction and regulatory engagement.

Although Altman was known for his interactions with world leaders and his role in key conferences, the specifics of his alleged missteps remain undisclosed.

Photo by Jonathan Kemper on Unsplash

Further Developments:

Greg Brockman, a co-founder of OpenAI and a close associate of Altman, resigned from his position as chairman in response to the board’s direction, although he remains with the company.

This decision underscores the significant impact of Altman’s departure on the organization.

Closing Remarks:

The reasons behind Altman’s firing are not entirely clear, with OpenAI and Microsoft, a major stakeholder, keeping details under wraps.

This discretion has led to ongoing speculation and questions within the tech community regarding the nature of Altman’s exit and the future direction of OpenAI.

Latest posts on X today by Sam Altman.

The Ultimate ChatGPT Cheat Sheet

Main points about ChatGPT:

  1. ChatGPT is an AI chatbot that has become one of the fastest-growing consumer apps in history, reaching 100 million users in just two months.
  2. It is based on OpenAI’s GPT-3.5 language model and is designed to understand and generate human-like responses to text prompts.
  3. ChatGPT has been trained on vast amounts of text data from the internet, including books, articles, websites, and social media, enabling it to generate responses that appear friendly and intelligent.
  4. While ChatGPT has gained popularity, it also has limitations. Its responses may sometimes be inaccurate, untruthful, or misleading, and OpenAI has cautioned against relying on it for essential matters.
  5. OpenAI has introduced a paid subscription plan called ChatGPT Plus, which offers benefits such as full access even during peak times, faster response times, and priority access to new features and improvements. However, ChatGPT is still available for free as well.

Here’s the shortcut you’ve always wanted for ChatGPT:

(Cheat-sheet image credits: Max Rascher, @shanefozard, @hasantoxr, finxter, Olivier, ArtificialCorner.com, and Technosmart.)

The Ultimate AI Tutorial and Tips