When Data Leaves the Country, Control Leaves With It

A personal reflection on why I insisted on an Enterprise IoT Platform with on-premise deployment

There are moments in a founder’s life when you stop talking about features.

You stop talking about dashboards.
You stop talking about protocols.
You even stop talking about scale.

And you start talking about control.

This was one of those moments.

I remember sitting quietly after one of our internal briefings, staring at the whiteboard filled with arrows, boxes, and deployment diagrams. Everyone had left the room. The air was still.

Why am I pushing so hard for this Enterprise Plan?
Why does this feel heavier than just another pricing tier or feature release?

Then it hit me.

This was not about software.
This was about sovereignty.

And once you see it that way, you can never unsee it.

From owning a kitchen to owning the whole restaurant

For years, I used a simple analogy when explaining IoT platforms.

If you are on a shared cloud platform, you are like a chef renting a kitchen. You can cook. You can serve. But you do not own the space. You follow house rules. You live with limits.

The Enterprise Plan is different.

It is not about owning the kitchen anymore.
It is about owning the whole restaurant.

The building.
The keys.
The doors.
The data flows.
The servers sitting quietly on your own premises.

When you own the restaurant, no one tells you when to close. No one caps how many customers you can serve. No one decides where your ingredients come from.

That is the mindset behind the Enterprise IoT Platform.

The moment I realised cloud is not always the answer

For a long time, cloud felt like the default answer to everything.

Fast.
Flexible.
Convenient.

I believed in it. I still do, for the right use cases.

But over the years, as I spoke to large organisations, city operators, government agencies, and critical infrastructure owners, a pattern kept repeating itself.

“We don’t want our data outside the country.”
“We need to know exactly where the servers are.”
“We cannot afford external dependencies for this system.”
“This data is too sensitive.”

At first, some people dismissed these concerns as paranoia.

I did not.

Because when you are dealing with traffic lights, water systems, energy grids, and public safety sensors, paranoia is just another word for responsibility.

What happens if the platform is outside the country and something goes wrong?
Who takes control when connectivity is lost?
Who answers when an entire city goes dark?

These are not theoretical questions. These are operational nightmares waiting to happen.

Data sovereignty is not a buzzword when infrastructure is involved

Data sovereignty sounds abstract until you put real consequences next to it.

Imagine a critical infrastructure monitoring system managed by a platform hosted overseas. One day, there is a major failure.

Power outage.
Network disruption.
Access blocked.

The local operators are standing there, staring at blank screens, unable to take control because the system that runs their infrastructure is not physically within reach.

That is unacceptable.

This is why on-premise deployment matters.

Not because it sounds serious.
Not because it looks impressive in a proposal.

But because control must stay with those who are accountable.

This thinking shaped every part of the Enterprise IoT Platform plan.

AI made the stakes even higher

If IoT data is sensitive, AI makes it explosive.

AI models learn from data.
Patterns.
Behaviours.
Weak points.

When AI touches critical infrastructure data, the question is no longer just “where is my data?”

It becomes “who understands my system better than I do?”

That was the turning point for me.

If AI is going to sit on top of IoT data, then the data must never leave the country.

This is not about fear.
This is about governance.

Every country I speak to says the same thing, whether it is Malaysia, Indonesia, the Middle East, or Europe.

“We want AI. But we want our data at home.”

The Enterprise Plan was designed to respect that reality.

Unlimited API is not a luxury; it is survival

One detail that often gets overlooked is API limits.

People ask me, “Why unlimited API? Isn’t that excessive?”

Let me paint you a picture.

A manufacturing line monitors machines every second.
One sensor. One data point per second.
Multiply that by hundreds of machines.
Multiply again by shifts, days, months.

Suddenly, 500,000 API calls per day is not generous. It is restrictive.
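To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The machine count and sampling rate are illustrative assumptions, not figures from any actual deployment or plan.

```python
# Back-of-the-envelope estimate of daily API calls on a factory floor.
# The numbers below are assumptions for illustration only.

SECONDS_PER_DAY = 24 * 60 * 60       # 86,400 seconds

machines = 200                        # assumed number of monitored machines
sensors_per_machine = 1               # one data point per second per sensor

calls_per_day = machines * sensors_per_machine * SECONDS_PER_DAY
print(f"API calls per day: {calls_per_day:,}")                 # 17,280,000

daily_cap = 500_000                   # the cap mentioned above, for comparison
print(f"Times over a 500k/day cap: {calls_per_day / daily_cap:.0f}x")  # ~35x
```

With those assumptions, even a modest plant blows past half a million calls within the first hour of a shift. That is the blind spot the Enterprise Plan is designed to remove.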

The Developer Plan has limits, and it should. It is built for builders, experimentation, and controlled scaling.

But enterprise environments do not experiment. They operate.

If you throttle data in an industrial environment, you are not saving costs. You are introducing blind spots.

Unlimited API is not about indulgence.
It is about continuous visibility.

Two very different enterprise realities

As I refined this plan, two clear deployment models kept emerging.

1. The white-label service provider model

Some organisations do not want to sell hardware.
They want to sell managed IoT services.

They do not want to build a platform from scratch. That path is expensive, slow, and painful.

So they white-label the Enterprise IoT Platform.

Their brand.
Their customers.
Their business logic.

They plug in their agricultural sensors, industrial devices, and vertical solutions, and run everything on their own enterprise platform.

Thousands of customers.
One controlled system.

I have seen how powerful this model can be when done right.

2. The smart city and government deployment model

Then there are cities.

Cities are different.

They already have many solutions. Parking. Flood sensors. Air quality. Lighting. Waste.

The problem is not a lack of data.
The problem is fragmentation.

Every system has its own dashboard. Its own vendor. Its own silo.

Local councils want a single platform, deployed on-premises, where everything comes together.

In some cases, councils cannot do this alone.

That is where state-level deployment makes sense.

One enterprise platform owned by the state.
Local councils connect their data.
Data stays within the country.
Visibility scales across regions.

It is pragmatic. It is cost-aware. It respects sovereignty.

This is bigger than one platform

As I reflect on this journey, I realise something.

The Enterprise IoT Platform is not just a product decision.
It is a philosophical stance.

It says:

You should own your data.
You should control your infrastructure.
You should not outsource accountability.

In a world rushing towards convenience, this is a reminder that responsibility still matters.

A quiet call to builders, cities, and leaders

If you are building systems that people depend on, ask yourself one simple question.

When things go wrong, who truly has control?

If the answer is unclear, it might be time to rethink how your platform is deployed.

I did.
And that rethink led us here.

I would love to hear your thoughts.
Where do you draw the line between convenience and control?
Share your reflections in the comments.

Favoriot: AI Agents Not Needed Now

Does Favoriot need to develop an AI Agent feature?

Short answer? No, Favoriot does not need full AI Agent automation right now.

And yes, what you have today is more than enough for the market you are serving.

Let me explain this the way I usually reason with myself.

I asked myself this quietly

“Do customers really want systems that act on their own…

or do they want systems they can trust?”

When I sit with city operators, facility managers, engineers, or even researchers, one thing keeps coming up.

They are not asking for autonomy.

They are asking for clarity.

They want fewer surprises.

They want earlier signals.

They want confidence before taking action.

That matters.

What Favoriot already does well

Right now, Favoriot Intelligence does something very important and very rare.

It learns patterns from real operational data

It surfaces what looks unusual

It feeds those insights into a Rule Engine

And then… it stops

That stopping point is not a weakness.

It is a design choice.

The system says,

“Here is what changed.

Here is why it matters.

You decide what to do next.”

That is precisely where trust is built.

Rule Engine + ML is not a compromise

Some people frame this as:

“Rule Engine now, AI Agents later.”

I don’t see it that way.

I see it as:

ML decides what deserves attention

Rules decide what action is allowed

This separation is powerful.

Why?

Because rules are:

  • Auditable
  • Explainable
  • Governable
  • Aligned with SOPs and regulations

And ML is:

  • Adaptive
  • Pattern-driven
  • Good at spotting drift and anomalies

Together, they form a human-in-the-loop intelligence system, not a black box.

That is exactly what enterprises and public sector teams are comfortable with today.
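For readers who like to see the pattern in code, here is a minimal sketch of the ML-plus-rules separation, assuming a simple statistical anomaly scorer. It is an illustration of the idea, not Favoriot's actual implementation or API.

```python
# Minimal sketch of the ML-plus-rules pattern: ML scores what deserves
# attention, explicit rules decide what action is allowed, and a human
# always makes the final call. Names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    temperature_c: float

def anomaly_score(reading: Reading, baseline_mean: float, baseline_std: float) -> float:
    """ML stand-in: how unusual is this reading relative to learned behaviour?"""
    return abs(reading.temperature_c - baseline_mean) / max(baseline_std, 1e-6)

def apply_rules(reading: Reading, score: float) -> str:
    """Rules decide what action is *allowed*; nothing executes on its own."""
    if score > 4.0:
        return f"ALERT: {reading.device_id} needs inspection (score {score:.1f})"
    if score > 2.0:
        return f"NOTIFY: watch {reading.device_id} (score {score:.1f})"
    return "OK: no action"

reading = Reading("pump-07", temperature_c=92.5)
score = anomaly_score(reading, baseline_mean=70.0, baseline_std=5.0)
print(apply_rules(reading, score))   # the operator decides what to do next
```

The scoring function can grow as sophisticated as you like; the rules, and the human reading the alert, remain the only path to action.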

Do customers actually want AI Agents?

Here’s the uncomfortable truth.

Most organisations say they want AI to “automate everything”.

But when you ask one more question…

“Are you okay if the system shuts down equipment on its own?”

“Are you okay if it triggers evacuation automatically?”

“Are you okay if it changes operating parameters without approval?”

The room goes quiet.

What they really want is:

  • Earlier warnings
  • Better recommendations
  • Fewer false alarms
  • Less manual rule tuning

Favoriot Intelligence already delivers that.

Where AI Agents actually make sense later

I’m not against AI Agents. Not at all.

But their place is conditional, not universal.

AI Agents make sense when:

  • Policies are mature
  • Actions are reversible
  • Risk is low
  • Trust has been earned over time

For example:

  • Automated report generation
  • Recommendation ranking
  • Suggesting rule adjustments
  • Proposing actions for approval

Notice the word: suggesting, not executing.

That is a natural evolution path.

Not a starting point.
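To make "suggesting, not executing" concrete, here is a minimal sketch of an approval-gated proposal queue. The class and function names are hypothetical, not part of any existing platform.

```python
# Minimal sketch of "propose, don't execute": an agent may only queue a
# suggestion, and a human must approve it before anything runs.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    action: str
    reason: str
    approved: bool = False

@dataclass
class ApprovalQueue:
    pending: list[Proposal] = field(default_factory=list)

    def suggest(self, action: str, reason: str) -> None:
        """The agent side: it may only suggest."""
        self.pending.append(Proposal(action, reason))

    def approve_and_run(self, index: int, execute) -> None:
        """The human side: nothing executes without explicit approval."""
        proposal = self.pending[index]
        proposal.approved = True
        execute(proposal.action)

queue = ApprovalQueue()
queue.suggest("raise pump-07 inspection frequency", "temperature drift detected")
# Later, an operator reviews the queue and explicitly approves:
queue.approve_and_run(0, execute=lambda action: print(f"Executing: {action}"))
```

The agent's only privilege is to add to the queue; execution always waits for an explicit human decision.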

Strategically, Favoriot is in the right place

By keeping:

  • ML for learning and insight
  • Rules for control and action

Favoriot positions itself as:

  • Reliable
  • Safe
  • Deployable today
  • Acceptable to conservative sectors

Smart cities.

Utilities.

Campuses.

Critical infrastructure.

These sectors do not reward “full autonomy” first.

They reward predictability and confidence.

My honest conclusion

If I had to answer this as simply as possible:

Favoriot does not need AI Agents to be valuable.

Favoriot Intelligence with ML-driven rules is already the right solution for today.

AI Agents can come later, carefully, selectively, and with guardrails.

Right now, Favoriot is doing something more important than automation.

It is helping people think earlier, not react later.

And that, in my book, is real intelligence.

Leading LLMs of August 2025: Who’s Winning the AI Race?

If AI progress felt like a sprint in 2023, by 2025, it looks more like a rocket launch. Models aren’t just improving year by year—they’re leaping ahead month by month. What we thought was “cutting edge” last quarter is already yesterday’s news.

Here’s the reality: the LLM market in North America alone is surging toward $105.5 billion by 2030. That’s not a forecast—it’s a signal. AI is no longer a novelty; it’s infrastructure.

But with so many options, which models actually matter right now? Which ones are shaping the way businesses, developers, and researchers use AI today?

I’ve rounded up the 10 large language models making the most significant impact in August 2025. Each one has its own unique personality, strengths, and trade-offs.

1. OpenAI – GPT-5

GPT-5 is the next step in OpenAI’s journey, moving beyond GPT-4.5’s strengths to deliver a model that feels sharper, more adaptive, and more transparent in its reasoning. Where GPT-4.5 leaned heavily on pattern recognition, GPT-5 combines that fluency with stronger deliberate reasoning, giving it the ability to break down problems with more structure and clarity.

It is also built to integrate more smoothly into real workflows. From handling long-form context with greater accuracy to providing clearer explanations of its answers, GPT-5 is less about simply generating text and more about acting as a reliable partner. The model handles multimodal input—text, images, audio, and video—with greater fluidity, making it useful across industries from education to enterprise automation.

Like its predecessor, GPT-5 remains proprietary, available through subscriptions or enterprise licensing. But for teams that want both conversational polish and deeper reasoning ability in one package, GPT-5 has quickly become the new reference point.

2. DeepSeek – The Open-Source Challenger

China’s DeepSeek R1 took the AI world by storm with 671B parameters in a Mixture-of-Experts setup. By May 2025, their DeepSeek-V3 was leading the open-source leaderboard, proving that open models can compete head-to-head with proprietary giants.

The magic? 30 times cheaper than OpenAI’s o1 and 5 times faster. It thrives in reasoning-heavy tasks like math, coding, and scientific simulations. And with RAG integration, enterprises can plug it into sensitive datasets while maintaining control.

If you want open-source power with enterprise-level results, DeepSeek is redefining the game.

3. Qwen – Alibaba’s Efficiency Master

Alibaba’s Qwen 3 family is quietly powering industries across Asia. Their standout, QwQ-32B, rivals GPT-4o and DeepSeek in reasoning and coding while requiring far less compute.

With 32K context windows, Apache 2.0 licensing, and a parameter range from 1.8B to 72B, Qwen has become one of the most accessible and widely adopted LLM ecosystems. Already, over 90,000 businesses use it for gaming, consumer electronics, and enterprise workflows.

Qwen proves you don’t need hyperscale resources to compete at the highest level.

4. Grok – Elon Musk’s Conversational Rebel

Built by xAI and integrated into the X platform, Grok 3 feels different. It’s witty, fast, and plugged into real-time information.

With Think, Big Brain, and DeepSearch modes, it breaks down problems and pulls fresh data directly from the web and social feeds. Trained with 10x the compute of Grok 2, it’s designed for speed and trend awareness.

If your world demands live analysis, news tracking, or instant customer interaction, Grok brings something truly unique.

5. Llama – Meta’s Open-Weight Titan

Meta’s Llama 4 arrived in April with two flagship versions: Scout and Maverick. Both are natively multimodal, handling text, images, and short video, and they boast 256K token context windows.

The openness of Llama remains its secret weapon. Businesses and researchers can run it on their own terms, tune it to specific workflows, and avoid vendor lock-in.

If freedom and flexibility matter most, Llama is the open-source heavyweight to trust.

6. Claude – Anthropic’s Reflective Thinker

Anthropic’s Claude 4 Sonnet is like the careful colleague who always double-checks their work. Its extended thinking mode allows the model to pause, reflect, and refine outputs before committing.

With a 200K-token context window, it handles long documents with ease, making it a natural fit for legal analysis, compliance-heavy industries, and coding projects that need extra accuracy.

If reliability is more important than speed, Claude delivers consistency and thoughtfulness.

7. Mistral – Small but Mighty

Sometimes you don’t need a massive model—you need one that’s fast and affordable. Enter Mistral Small 3.

With 24B parameters, Apache 2.0 licensing, and speeds up to 150 tokens per second, it’s optimised for low-latency applications. The kicker? You can run it on a single GPU or even a MacBook.

For startups and lean businesses, Mistral proves that small models can pack a punch.

8. Gemini – Google’s Reasoning Powerhouse

Google’s Gemini 2.5 is pushing boundaries with a 1M-token context window. That means it can process entire books or databases in one shot.

It’s multimodal, handling text, images, and code, and comes with self-fact-checking to reduce hallucinations.

It’s proprietary, so data compliance matters, but if you want enterprise-grade multimodality and serious reasoning, Gemini is one of the most advanced options on the market.

For those preferring open weights, Google’s Gemma 3 (1B–27B) brings much of the same reasoning strength in a lighter package.

9. Command R – Cohere’s Enterprise Specialist

Cohere isn’t trying to win the hype war—it’s focused on enterprise workflows. Their Command R+ offers 128K context windows, built-in citations, multilingual coverage, and retrieval-augmented generation.

It excels at policy manuals, compliance-heavy industries, and multilingual customer service. And for companies needing control, Command A is open-sourced at 111B parameters with 256K context support.

For enterprises where accuracy and compliance come first, Cohere is a trusted partner.

10. Falcon – The Middle Eastern Power Play

From the Technology Innovation Institute (TII) in Abu Dhabi, Falcon has emerged as one of the strongest open-weight LLMs outside the US, China, or Europe.

The latest version, Falcon 2, boasts multilingual capabilities, optimised efficiency, and open-access licensing. It’s trained on a diverse dataset with an emphasis on global inclusivity, making it particularly strong in Arabic and other underrepresented languages.

What makes Falcon stand out is its mission: bringing AI sovereignty to regions that often depend on Western or Chinese tech. By providing a robust open-source model, Falcon gives governments, universities, and enterprises across the Middle East a homegrown alternative.

If AI diversity and regional sovereignty are important to you, Falcon is an LLM worth watching closely.

Closing Thoughts

Ten models. Ten different approaches to the future of AI.

  • OpenAI and Gemini lead with polished, proprietary power.
  • DeepSeek, Qwen, Llama, and Falcon prove open-source can compete and even outpace.
  • Claude and Cohere focus on reliability and compliance.
  • Mistral and Grok carve out niches in speed, agility, and personality.

The bigger question isn’t “Which is the best?” but “Which one is the best fit for you?”

AI in 2025 is not a single path—it’s a crossroads with ten directions. And whichever road you choose, the destination is changing how we work, build, and think.

Now I’d love to hear from you. Which of these ten models do you think will dominate the AI race by 2030—and why? Share your thoughts in the comments.

Who Will Lead the AI Race? Alibaba, DeepSeek, or OpenAI?

For years, the AI industry was dominated by Western tech giants like OpenAI, Google, and Microsoft. If you wanted access to the most powerful AI models, you either had to pay a premium or rely on whatever limited open-source alternatives were available. But in the past year, China has emerged as a serious contender, with Alibaba’s Qwen 2.5-Max and DeepSeek’s AI models challenging OpenAI’s dominance.

With three major players now in the spotlight—OpenAI, Alibaba, and DeepSeek—the big question is: Who will have the biggest impact?

Alibaba’s Qwen 2.5-Max: The Biggest Game-Changer?

1. Open-Source Availability: AI for Everyone, Not Just Big Tech

Unlike OpenAI and DeepSeek, which still maintain some level of exclusivity, Alibaba has taken a bold approach—releasing over 100 models from the Qwen 2.5 family as open-source.

This move is significant because:

  • It allows smaller businesses, researchers, and developers to build AI-powered applications without massive budgets
  • It accelerates global AI innovation, reducing reliance on Western AI monopolies
  • It levels the playing field, as anyone can access and customize Qwen’s models

Meanwhile, OpenAI’s GPT-4 and GPT-4o remain closed-source, limiting their accessibility. DeepSeek has released open-source models, but their reach is still largely within China and research communities.

If Alibaba continues to expand its open-source offerings, it could fundamentally shift the AI industry, much like how Linux transformed software development.

2. Performance vs. Cost: Who’s Winning the AI Benchmark Battle?

Performance benchmarks suggest that Qwen 2.5-Max is outperforming both DeepSeek-V3 and OpenAI’s GPT-4 in various AI tasks—from handling complex queries to multilingual processing.

But cost-effectiveness matters just as much as raw power. DeepSeek’s AI models are designed to be highly efficient, making them cheaper to run. This could attract businesses looking for powerful AI solutions without excessive computational costs.

OpenAI remains the leader in brand recognition and market presence, but its pricing model remains a concern for many. If Alibaba or DeepSeek can offer similar capabilities at a lower price point, OpenAI may need to rethink its strategy.

3. Multilingual AI: Qwen 2.5-Max Breaks Language Barriers

Most AI models are English-centric, which limits accessibility for non-English speakers.

  • Qwen 2.5-Max supports 29 languages, making it one of the most globally accessible AI models.
  • DeepSeek’s reach outside China is still uncertain, though it has strong backing domestically.
  • OpenAI’s ChatGPT is still largely optimized for English, with limited performance improvements in non-English languages.

For businesses and governments in Asia, Africa, and Latin America, Alibaba’s multilingual AI could be a game-changer. The more localized an AI model is, the more valuable it becomes for regional markets.

4. Enterprise Adoption: Who’s Getting Integrated Faster?

It’s one thing to build a powerful AI model—it’s another thing to get real businesses to use it.

Alibaba’s Qwen AI models are already being adopted in industries like:

  • Automotive (for AI-powered driving assistants)
  • Banking (for financial analysis and chatbots)
  • Retail (for customer service and product recommendations)

DeepSeek, while innovative, lacks major enterprise partnerships outside research institutions. Meanwhile, OpenAI’s models are widely used in Western markets, but its expansion into Asia and emerging economies remains slower compared to Alibaba.

If Alibaba can secure more industry adoption, it could become the go-to AI provider in non-Western markets.

5. Competitive Pressure on OpenAI: A Wake-Up Call?

For the longest time, OpenAI had no real competition. But with Qwen and DeepSeek gaining traction, the AI race has become a lot more unpredictable.

  • OpenAI is now rushing to release GPT-4.5 or “o3” sooner than expected, likely in response to competition.
  • If OpenAI doesn’t adjust its pricing, accessibility, or model capabilities, it risks losing users to cheaper and more open alternatives.

This is a critical moment for OpenAI—does it continue with a closed, premium AI model, or does it shift towards more affordability and openness?

Final Verdict: Who Will Have the Biggest Impact?

Biggest Short-Term Impact → Alibaba’s Qwen 2.5-Max

Why? Open-source availability, multilingual AI, and real enterprise adoption make it the most widely accessible AI model right now.

Biggest Long-Term Disruptor → DeepSeek

Why? Its cost-effective, research-driven approach makes it a dark horse in this race. If it expands globally, it could seriously challenge OpenAI and Alibaba.

Most Stable Market Leader → OpenAI

Why? It still holds the largest brand recognition, but will need to adapt quickly to remain competitive in a rapidly evolving AI landscape.

What’s Next?

  • Will OpenAI lower its pricing to compete with Alibaba and DeepSeek?
  • Can DeepSeek expand beyond China and into global markets?
  • Will Alibaba continue its open-source strategy, or will it tighten access in the future?

No matter what happens, one thing is clear: The AI market is no longer dominated by just one company. And that’s good news for everyone.

ChatGPT, CoPilot, Gemini, Grok, Perplexity, Claude & DeepSeek: Which One Should You Choose?

Here’s a breakdown of the main features to help you decide which AI tool suits your needs:

1️⃣ ChatGPT (OpenAI)

✅ Versatile and reliable: Perfect for writing tasks, explaining technical concepts, and generating or debugging code.

✅ Supports multiple languages: Works well even in non-English languages, including Malay.

✅ Developer-friendly: Offers a stable API, making it ideal for building applications with AI capabilities.

❌ Paid plans for heavy use: A subscription is required to unlock more advanced features or handle large-scale tasks.

2️⃣ Gemini (Google, formerly Bard)

✅ Live search integration: It’s directly connected to Google Search, making it great for delivering the most up-to-date information.

✅ Fact-focused: Excels at answering general knowledge questions and providing factual details.

❌ Context sensitivity can be inconsistent: It sometimes struggles to fully grasp complex or nuanced questions.

3️⃣ CoPilot (Microsoft, powered by GPT-4 Turbo)

✅ Productivity-focused: Fully integrated with Microsoft Office tools like Word and Excel, making it a powerful assistant for work-related tasks.

✅ Seamless Windows integration: An excellent choice for Windows users who need AI within their workflow.

❌ Access requirements: You’ll need a Microsoft account; in some cases, access may be restricted to specific users or organisations.

4️⃣ Grok (X, by Elon Musk)

✅ Social media-savvy: Specially designed for users of X (formerly Twitter), offering features to analyse and follow social media trends.

✅ Trend-aware: Focuses on staying current with the latest online conversations.

❌ Still evolving: As a relatively new tool, it lacks the maturity and extensive testing of more established platforms.

5️⃣ DeepSeek AI

✅ Specialist in technical fields: Excellent for research, coding, and solving complex mathematical problems.

✅ Completely free: A budget-friendly option for students, researchers, or anyone exploring technical AI applications.

❌ Limited advanced features: Compared to tools like ChatGPT or Gemini, it doesn’t offer comprehensive capabilities or integrations yet.

6️⃣ Perplexity AI

✅ Great for fact-checking: It can reference the internet in real-time, making it highly effective for quick research or verifying information.

✅ Focused research tool: Handy for researching specific topics and finding credible sources.

❌ Can lack deeper understanding: Tends to compile data rather than analyse or interpret it meaningfully.

7️⃣ Claude (Anthropic)

✅ Contextually intelligent: Known for its ability to generate high-quality text and clearly understand the context in conversations.

✅ Safe and ethical: Less prone to producing harmful, biased, or inaccurate content, making it a reliable option.

❌ Less recognition locally: Not as widely known or used in Malaysia, though its features are robust and competitive.

How to Choose the Right AI for Your Needs

  •  For writing, coding, or creative projects → ChatGPT or Claude, for their depth and versatility.
  •  For real-time information or up-to-date research → Gemini or Perplexity.
  •  To boost productivity at work with office tools → CoPilot, which works seamlessly with Microsoft applications.
  •  For social media enthusiasts → Grok, designed to help you navigate trends on X (Twitter).
  •  For technical problem-solving or coding on a budget → DeepSeek AI.

Each tool has unique strengths, so the choice depends on your specific use case and preferences.

The ChatGPT Millionaire — Book Review

By Mazlan Abbas

Making Money with AI

Having dabbled with ChatGPT before, I approached “The ChatGPT Millionaire” with a mix of skepticism and curiosity.

While I’ve had my share of exposure to AI and its utilities, this book promised to offer new insights and methods for monetizing this tool, particularly with GPT-4.

Here’s my take on whether it lives up to its promise.

Surprisingly Insightful!

From the start, the book emphasizes creating value with minimal effort — a proposition too tempting to ignore.

The idea of generating passive income by leveraging AI is not just enticing but revolutionary in today’s gig economy.

What struck me was the simplicity of the guide. It’s as if the author is holding your hand and walking you through each step with ease.

“Can I really create a sustainable income source this easily?” I wondered initially.

But as I delved deeper, the book’s pragmatic approach and clear, actionable steps dispelled my doubts.

It’s not just about making quick money; it’s about smart, efficient work capitalizing on the current market gap in AI utilization.

Practical and Actionable

One of the book’s strengths is its practicality.

The section on impressing clients by delivering high-quality work at lightning speed resonated with me.

As someone who values efficiency and quality, I believe the strategies outlined here are game-changers.

For instance, the guide on creating engaging content provides not just the ‘how-to’ but also the ‘why’ — a crucial aspect often overlooked in similar guides.

As I experimented with the “Act as” prompts provided, I was amazed at the versatility and adaptability of GPT-4.

The prompts are not just instructions but keys to unlocking the AI’s potential in various niches, from writing to coding to social media management.

Beyond Theoretical Knowledge

The book offers real-world applications and examples, making the leap from theory to practice seamless.

The narrative of becoming a “superhuman freelancer” isn’t far-fetched when you see the tangible examples and templates provided.

It’s empowering to envision oneself completing tasks with such efficiency and precision, thanks to AI.

Yet, it’s not all roses.

The author doesn’t shy away from discussing the limitations of ChatGPT, grounding the book in reality.

This transparency builds trust and sets realistic expectations—a crucial element in any guide that aspires to be actionable and reliable.

The Verdict — A Must-Read for Aspiring Entrepreneurs

In conclusion, “The ChatGPT Millionaire” is more than just a manual; it’s a blueprint for tapping into an emerging market.

Whether you’re a seasoned freelancer or a newbie looking for innovative income streams, this book has something for you.

Its blend of simplicity, depth, and practicality makes it a standout resource.

While I entered somewhat skeptical, I emerged enlightened and excited about the possibilities.

This isn’t just a book; it’s a great way to transform how we approach work and income generation in the AI age.

The added bonus materials only sweeten the deal, offering ongoing value and a toolkit for success.

So, would I recommend it? Absolutely.

Whether you’re looking to enhance your productivity, expand your business, or just curious about AI’s potential, “The ChatGPT Millionaire” is a treasure trove of insights and strategies that promise to inform and transform.


Get the book The ChatGPT Millionaire from Amazon.

(This article contains affiliate links; I will earn a commission if you decide to purchase.)

AI in Writing – Cheating or Just Another Tool in the Toolbox?

Navigating the Nuances

AI’s Role in Polishing My Prose

Photo by Anita Austvika on Unsplash

AI and the Art of Writing: A Non-Native Speaker’s Perspective.

The integration of artificial intelligence (AI) in our daily lives is a debate that has only grown more intense with time.

From driving cars to predicting weather, AI’s reach seems limitless.

But when it comes to the sacred act of writing, should we lean on AI for assistance, or is that crossing a sacred line of human creativity?

As someone who’s grappling with these questions personally, I’ve got some thoughts to share.

First off, let me be clear — I align with many writers who believe that using AI to write an entire article with just a prompt isn’t the way to go.

The essence of writing, for me, involves a deep personal engagement with the topic, a process that shapes thoughts into words in a way that’s inherently human.

To simply feed a prompt to an AI and let it churn out content feels like a shortcut that bypasses the soul of writing.

But here’s where my perspective might diverge from some purists — I use AI to assist in my writing.

Now, before you raise your eyebrows, hear me out.

English isn’t my first language, and I’m learning to master it.

This journey is filled with nuances, idioms, and syntactical structures that are sometimes baffling.

In my quest to express myself clearly and fluently, I’ve found AI to be a valuable ally.

When I write, the initial draft brims with my thoughts, raw and unfiltered.

It’s me at my most genuine, but it’s also me at my most vulnerable, grappling with a language that isn’t my mother tongue.

After pouring my thoughts onto paper, I turn to AI, not to rewrite my content but to refine it.

The essence, the core ideas, and the originality – all remain untouched, sourced from the wellspring of my creativity.

Now, you might wonder, is that wrong?

Is it cheating, or is it simply using available tools to bridge a linguistic gap?

To me, writing is an expression of the self, an art form that is as personal as it is universal.

When I use AI, it’s a bit like a non-native painter using a more advanced brush to bring their visions to life.

The painting’s concept, its emotion, and its message remain the artist’s own.

The brush is merely a tool to help realize that vision more vividly.

However, I recognize the other side of the argument.

Writing should maintain a human touch, they say.

It should reflect the imperfections, the quirks, and the unique style of its creator.

By involving AI, do we risk sanitizing writing to the point where it loses its individuality, its human essence?

I ponder this, especially when I see AI-written pieces that lack warmth or personal insight.

Yet, I believe the answer isn’t a blanket rejection of AI assistance but a balanced approach.

AI can help polish grammar, enhance clarity, and even suggest ways to make the prose more engaging.

But it should not – and cannot – replace the human experience, the personal stories, and the authentic voice that make writing resonate.

Consider this: every tool, from the quill pen to the word processor, has influenced how we write.

Yet, at their core, these are just tools – they don’t create art on their own.

Similarly, AI is a tool, albeit a sophisticated one.

It can assist, but the spark of creation? That remains distinctly human.

So, to those who argue that AI has no place in writing, I offer this perspective: AI is not a replacement but an enhancement, a means to bridge gaps and elevate expression.

For non-native speakers like me, it’s a way to communicate more effectively, ensuring our voices are heard and our messages understood.

In the end, isn’t the goal of writing to connect, to convey, to communicate?

If AI can help us do that more clearly, more powerfully, then perhaps it’s not the enemy but an ally.

After all, the essence of writing – the ideas, the passion, the message – will always come from the human heart and mind.

Let’s not shun AI in writing outright.

Instead, let’s find ways to use it responsibly, ensuring that it enhances rather than eclipses our human touch.

By doing so, we honor both the tradition of writing and the potential of technology, weaving them together in a way that enriches our expression and our understanding of each other.

Question to ponder — Is using AI … cheating or just another tool in the toolbox?


The Sins of Writing Using AI (ChatGPT) — Can You Forgive Me?

There are times I get the help of AI. Is that a big problem?

Image created using Canva by Author

I was sitting in front of my computer screen, scrolling through yet another article hammering home the point that using AI or ChatGPT for our writing is a big no-no.

The frustration was real.

My mind was swirling with thoughts, “Really? Not even a little help?

It felt like being handed a paintbrush but told not to use any colors.

Here’s the deal — I get it, okay?

The idea of just tossing a simple prompt into the AI abyss and letting it spit out a full-blown article or book without a second thought doesn’t sit right with me either.

That’s not creativity; that’s just being lazy.

But here’s where my frustration bubbles up like a volcano on the verge of eruption: English isn’t my first language. Getting those perfect, grammatically flawless sentences down?

It’s like trying to navigate a maze blindfolded.

Hours, I tell you, hours spent wrestling with a 500 or 1000-word article, trying to make it sound just right.

And it’s not just about grammar.

Conveying what I want to say, filling my articles with the right emotions, the nuances, the color — it’s daunting.

So, here’s the forbidden thought: what if I draft my thoughts, outline my content, and then, just then, let AI or ChatGPT fine-tune it?

Is that really so wrong? A sin, even?

But what if the article turns out better? What if it actually gets read, appreciated, or — dare I say — earns something on Medium? Wouldn’t that be a good thing?

The thing is, some of my articles that had a touch of AI assistance received good responses and even earned a bit. Meanwhile, the ones I slaved over, pouring my heart and soul into every word without any AI help, barely made a ripple.

It’s enough to make you wonder, isn’t it?

“Why is it that the ones I work hardest on, the ones I pour my everything into, just… flop?” I’d ask myself, staring at the screen, waiting for an answer that never comes.

And yet, there’s this lingering fear — what if a Medium publication outright rejects my work because it got a helping hand from AI?

Even if I’m the one steering the ship, drafting the content, and ensuring it reflects my voice, my ideas?

So here’s the confession, the moment of truth at the end of this tale of frustration and inner turmoil: this article, the very one you’re reading now, had a bit of help from ChatGPT.

Yeah, I said it.

With its assistance, I managed to shape my thoughts, polish my words, and hopefully, make this piece engaging and coherent.

Can you forgive me?

In a world where the line between human and machine creativity is becoming ever so blurred, maybe it’s not about who writes the article but the story it tells, the emotions it evokes, and the impact it has.

If using AI means breaking down language barriers and bringing our stories to life in ways we never could on our own, perhaps it’s not such a sin after all.

What If Steve Jobs Were the Founder of Favoriot Instead of Me?

How would he present the pitch?

Imagine Steve Jobs as the Founder of Favoriot, a Malaysian IoT Startup

Imagine if Steve Jobs became the Founder of Favoriot (a Malaysian IoT Startup) and delivered the opening speech at the Favoriot Worldwide Developers Conference 2024.

As we all know, his speeches carry a unique voice, tone, and style of messaging.

With the help of ChatGPT, I wrote a prompt asking it to act as Steve Jobs, the Founder of Favoriot, and deliver his opening speech or pitch.

Image generated by Midjourney by Author

Let’s listen to what he has to say:

“Ladies and gentlemen, in a world where technology increasingly touches every aspect of our lives, there’s one frontier we’re just beginning to explore: the Internet of Things, or IoT.

Imagine a world where every device, every sensor, every heartbeat, and every breath is connected, creating a symphony of data designed to improve our lives.

This is not just a dream. This is the reality we’re building at Favoriot.

At Favoriot, we believe in a future where technology serves humanity, not the other way around.

We’re pioneering IoT solutions that empower individuals, businesses, and communities to connect and engage with the environment around them in ways never before possible.

Our platform is more than just technology; it’s a bridge between the digital and physical worlds, designed to enhance health, safety, and efficiency.

Imagine smart cities that optimize traffic flow in real time, reducing pollution and saving time.

Picture wearable devices that monitor your health, predicting potential issues before they become problems.

Consider agricultural sensors that ensure crops receive precisely the right amount of water, maximizing yield while conserving resources. This is the world Favoriot is creating.

But it’s not just about the technology; it’s about what we do with it. We’re here to start a revolution, one where every piece of data enriches our lives, making the world a more connected, more sustainable, and healthier place.

Our platform is built to scale, to evolve, to become the foundation upon which the future is built.

Innovation. Connection. Transformation.

These aren’t just words; they’re the pillars of everything we do at Favoriot.

We’re not just creating products; we’re crafting experiences, we’re shaping the future, and we invite you to join us on this journey.

The future isn’t waiting.

It’s here, and it’s powered by Favoriot. Let’s build it together.”

Image generated by Midjourney by Author

What do you think about the speech? Please leave your comments. Thanks for reading!

Blogging with AI: How to Balance Between Authenticity and Efficiency

Photo by Andrea De Santis on Unsplash

Do you blog with the help of AI such as ChatGPT?

Are you ashamed of using AI when writing articles?

Many have done this, so why can’t you?

Embracing AI in the Blogosphere: A Double-Edged Sword

When ChatGPT first graced the digital world, it promised a revolution in content creation.

The allure of instant, AI-generated blog posts was undeniable, especially for those grappling with the complexities of technical subjects.

The initial excitement was palpable: feed ChatGPT a line, and voila, a full-fledged article appeared. Yet, this initial euphoria was short-lived as the drawbacks became evident.

These AI-crafted pieces, although informative, often lacked the personal touch and authenticity that resonate with readers.

They felt mechanical, a tad too perfect, and eerily similar across different platforms. This was the conundrum: AI’s efficiency versus the need for an authentic, human connection in writing.

The Authenticity Challenge: A Crucial Pivot

The true battle is for engagement and connection.

Authenticity isn’t just a buzzword; it’s the cornerstone of effective communication.

The realization that reliance on AI compromised this essential aspect marked a pivotal moment.

The content, though factual, missed the human element – the unique voice and perspective that make each blog distinct and relatable.

Photo by julien Tromeur on Unsplash

A Shift in Strategy: AI as an Enhancer, Not a Replacer

The journey led to a critical insight: ChatGPT’s role isn’t to replace the human touch, but to augment it.

The new methodology involved crafting the initial draft personally, pouring in unique insights and perspectives.

This approach kept the core of the content genuine and reflective of the author’s voice.

Grammar and stylistic nuances took a backseat in this phase, allowing for a free flow of ideas.

For example, if you have been reading my articles, you will recognise my personal nuances compared with an AI-written article.

ChatGPT: The Virtual Copywriting Assistant

After the creative outpouring, ChatGPT’s role transformed into that of a virtual copywriter.

It wasn’t about generating content anymore but enhancing what was already there. Through prompts like “Rewrite the blog in a more human manner,” the AI tool fine-tuned the draft – fixing grammatical errors, enriching the vocabulary, and improving overall structure.

The outcome? A well-crafted piece that retained its original essence while being polished by AI’s capabilities.
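For those who prefer to run this step programmatically, here is a minimal sketch of the same workflow using the OpenAI Python SDK. The model name and system prompt are assumptions for illustration; the web interface works just as well.

```python
# A minimal sketch of the "virtual copywriter" step: polish a human-written
# draft without replacing the author's voice. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY set in the environment.

from openai import OpenAI

client = OpenAI()

draft = """My raw first draft goes here, written entirely in my own voice,
grammar warts and all."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You are a copy editor. Fix grammar and improve clarity, "
                    "but keep the author's voice, ideas, and structure intact."},
        {"role": "user",
         "content": f"Rewrite the blog in a more human manner:\n\n{draft}"},
    ],
)

print(response.choices[0].message.content)  # the polished draft, ideas unchanged
```

The key design choice is in the system prompt: the tool is told to refine, not to generate, which mirrors the workflow described above.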

Photo by Mohamed Nohassi on Unsplash

The Best of Both Worlds: Originality and Polish

This synergistic approach redefined the role of ChatGPT in content creation.

It morphed from a creator to an enhancer, ensuring that the author’s unique voice was not just preserved but also elevated.

It’s akin to having a copywriter on standby, one that fine-tunes without overshadowing the originality of the content.

In Conclusion: Harmonizing Creativity and Technology

The journey with ChatGPT in the blogging realm underscores a critical balance: leveraging technology without sacrificing the human element that is at the heart of storytelling.

While ChatGPT may not spark the initial creative flame, it certainly can fan it into a stronger, clearer, and more engaging narrative.

Photo by Andrew Neel on Unsplash

Key Takeaways

  1. AI as a Tool, Not a Crutch: Use ChatGPT to enhance, not replace, your unique voice.
  2. Balance Is Key: Harmonize your authentic content with AI’s ability to refine and polish.
  3. Evolution Over Revolution: Gradually integrate AI into your writing process; let it evolve naturally.
  4. Preserve Your Voice: Always start with your own insights and perspectives – the essence of your blog.
  5. Embrace Technology Wisely: Use ChatGPT as a means to elevate your writing, not as a shortcut to generate it.

This reflective journey, enhanced by ChatGPT, serves as a testament to the evolving relationship between human creativity and artificial intelligence in the realm of blogging.