[Original article published in The Malaysian Reserve (7 June, 2023) – “Does AI pose a risk to human jobs?” ]
While it is true that AI has the potential to replace certain tasks and jobs, it is important to approach this topic with a balanced perspective
SEVERAL industries have raised concerns about the encroachment of artificial intelligence (AI) as its tools achieve results comparable to human creative work, raising worries around intellectual property (IP).
Although not all AI risks are as dramatic as killer robots or sentient AI, the technology could still erode people's chances of becoming better employees, and some fear it could even threaten the human race itself.
Hollywood screenwriters are currently on strike as they are concerned that robots and technology will take over their profession.
The Writers Guild of America (WGA), a labour union representing 11,500 writers, began its strike on May 2, seeking higher pay and raising concerns about the emergence in recent months of generative AI, such as ChatGPT, in the creative industries.
Meanwhile, as reported by BBC News, AI “godfather” Geoffrey Hinton warned about the growing dangers from developments in the field when he resigned from his post at Google LLC, stating that some of the dangers of AI chatbots were “quite scary”.
“Right now, they’re not more intelligent than us, as far as I can tell. But I think they may soon be,” he told the media outlet.
The question remains, should we be worried about the advanced development of AI technology?
The Malaysian Reserve (TMR) reached out to AI expert Dr Afnizanfaizal Abdullah, GM of the Malaysian Research Accelerator for Technology & Innovation (Mranti), to provide insights on the matter.
He said the critical IP risk of AI lies in inventions created or generated by AI itself, as patent offices generally require that an invention be the result of human ingenuity or a non-obvious step forward.
“Determining whether AI-generated inventions meet these criteria can be a legal and technical challenge. At the same time, AI systems can create, reproduce or manipulate copyrighted materials, such as music, images or written content, leading to potential infringement.
“For example, AI-generated content may replicate or modify existing works without authorisation. This has also led to concerns about data ownership and privacy. The rights to data used in AI applications and the protection of personal information have become significant considerations,” he told TMR.
Afnizanfaizal said while AI can be a threat to future employment, it can also bring new opportunities to the workforce, as the technology will most significantly affect repetitive jobs that add less value to outcomes or productivity, especially in manufacturing, transportation, customer service and data entry.
Despite that, he said, AI can also create new job opportunities by enabling the development and deployment of AI systems, algorithms and applications, as these technologies require skilled professionals to design, implement, maintain and oversee them.
“The growth of AI-related industries and emerging fields may result in new employment opportunities. So, most importantly, we as humans need to augment ourselves with new skills and knowledge so that we are able to cope with new waves of technological advancement that may be affecting our professions,” he said.
When asked about which industries are negatively impacted by AI-related technologies, he mentioned that the manufacturing industry has experienced a significant impact from automation and robotics. This is because AI-powered machines and robots have already automated various production processes, resulting in increased efficiency, reduced labour costs and improved precision.
He added that AI-powered chatbots and virtual assistants have also become prevalent in customer service and call centres as AI systems can handle routine inquiries, provide automated responses and assist customers with basic tasks.
“Not only that, autonomous vehicles and drones are being developed and tested, potentially affecting jobs such as truck drivers, delivery personnel and warehouse workers,” he said.
Additionally, Afnizanfaizal mentioned that two other industries that may be involved in the long run are financial services and healthcare.
“AI-powered algorithms can analyse vast amounts of data, make predictions and automate decision-making processes. This has led to increased efficiency but also raised concerns about potential bias, transparency and the need for regulatory oversight,” he said.
Meanwhile, Afnizanfaizal reassured writers that they should not be worried about being replaced by ChatGPT or any other software. He emphasised that these tools act as catalysts for writers to gather points and facts more quickly, eliminating the need to search for them manually in other sources.
“The only way that writers could be replaced by their AI counterparts is if the content purely presents data and information, without any additional facts that come from their own thoughts and experience,” he said.
Difference between Humans and AI
To compare the work quality of human beings and AI, he said humans possess unique cognitive abilities, intuition, creativity and emotional intelligence that AI systems currently struggle to replicate. AI, on the other hand, excels in areas where it can process and analyse vast amounts of data, recognise patterns and perform repetitive tasks accurately and quickly.
Taking the manufacturing or transportation industries as an example, he said, AI can monitor equipment and systems, analyse sensor data and predict maintenance needs or potential failures.
“This proactive approach helps optimise maintenance schedules, minimise downtime and reduce costs. However, human experts are needed to validate whether the outputs from the AI system are correctly predicted based on their experience,” he said.
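The predictive-maintenance idea Afnizanfaizal describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration only — the sensor readings, window size and threshold are invented for the example, not drawn from any real system:

```python
# Minimal sketch of a threshold-based predictive-maintenance check.
# All readings, window sizes and thresholds here are hypothetical.

def flag_maintenance(readings, window=3, threshold=80.0):
    """Flag sliding windows whose average reading exceeds a safe threshold.

    Returns a list of (start_index, window_average) alerts that a human
    expert would then validate, as described in the article.
    """
    alerts = []
    for i in range(len(readings) - window + 1):
        avg = sum(readings[i:i + window]) / window
        if avg > threshold:
            alerts.append((i, round(avg, 1)))
    return alerts

# Vibration readings trending upward as a part wears out (hypothetical).
vibration = [62.0, 65.5, 70.1, 74.8, 79.3, 84.6, 90.2]
print(flag_maintenance(vibration))  # flags the final window
```

Real systems would use learned models rather than a fixed threshold, but the human-in-the-loop validation step the expert mentions applies either way.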
When asked about ethical considerations and potential biases associated with AI algorithms and their decision-making process, the AI expert highlighted that it is an important area of concern, particularly because AI learns from data.
He said if the training data is biased or reflects existing societal prejudices, the algorithm may perpetuate those biases in its decision-making, and biases related to race, gender, age or other protected attributes can lead to unfair or discriminatory outcomes.
“Establishing clear lines of accountability is crucial, especially in high-stakes domains such as healthcare, finance or criminal justice. Even if not explicitly programmed to be biased, AI algorithms can inadvertently produce discriminatory outcomes due to biased data or flawed model design,” he said.
This, Afnizanfaizal said, can be seen in an algorithm used in hiring processes, which may inadvertently discriminate against specific demographics if the training data reflects biases in past hiring decisions.
“Thus, systems should be designed to treat all individuals fairly, regardless of their background or characteristics. Fairness considerations include avoiding disparate impact, promoting equal opportunities and accounting for the contextual factors influencing outcomes,” he added.
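One common way to quantify the “disparate impact” Afnizanfaizal mentions is the four-fifths rule: a group's selection rate should be at least 80% of the highest group's rate. The sketch below applies that check to hypothetical hiring outcomes — the group names and numbers are invented for illustration:

```python
# Sketch of a disparate-impact check on hiring outcomes using the
# "four-fifths rule". All group names and counts are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (hired, applicants)} -> {group: selection rate}"""
    return {g: hired / total for g, (hired, total) in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """For each group, return (ratio to best-off group, passes threshold)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

# Hypothetical outcomes from an automated screening model.
outcomes = {"group_a": (50, 100), "group_b": (20, 100)}
print(disparate_impact(outcomes))
```

Here group_b is selected at only 40% of group_a's rate, failing the 80% test — the kind of audit signal that would prompt a review of the training data or model design.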
When it comes to the limitations of current AI technology, he said the major bottleneck is infrastructure, as AI development and operations often require high-end computational capabilities.
Furthermore, he said many cloud services provide sufficient computational capacity to run AI systems, but at significant cost.
“The potential approach is integrating cloud services with physical hardware and introducing a hybrid cloud infrastructure. The reliance on large amounts of data makes AI systems vulnerable to data breaches and privacy infringements.”
IP Risks of AI Systems
Weighing in on the same matter, Dr Mazlan Abbas, a technology expert and the CEO of Internet of Things (IoT) company Favoriot Sdn Bhd, said AI does pose risks to IP if users are not mindful of how they input information into the AI system.
According to Mazlan, AI depends on the information it learns from; without that information, AI cannot give its best output.
This, he emphasised, is especially true when the AI system is owned by a third party. “For instance, take ChatGPT as an example, which is a service owned by OpenAI. If we input highly confidential information that has not been patented, there is a risk of that information being exposed and becoming prior art. Consequently, we would be unable to patent our discoveries.
“However, if the AI system is owned by our company, we have better control over the information and can prevent it from being leaked to external parties,” he told TMR.
When questioned about the potential threat of AI to employment in the future, Mazlan acknowledged that, to a certain extent, AI will replace numerous jobs, and companies may hire fewer people than before.
At this stage, he suggested, humans may use AI technology as an assistant, since it still requires human input and instructions to accomplish tasks.
However, he cautioned that with more advanced AI systems or chatbots, there may be a reduced need for human interaction, potentially leading to a greater threat to existing jobs in the future.
“It is already evident that AI technology is assisting in various current jobs, including writers, photographers, artists, voice-over artists, actors, graphic artists, copywriters and many others,” he said.