Hey Chatbot! Do I Really Need You?

Adopting AI without a clear purpose or sense of outcome can cause more trouble than benefit, argues Alyan Khan-Yusufzai.
Published 07 Jan, 2025 10:18am

Over the last three decades, technology has evolved at a pace humanity had never witnessed before, with every decade bringing a new kind of technology that enabled mass usage, production and demand on a global scale. As we stepped into the 2020s, a new realm of computing entered mainstream conversations – Generative AI. The shift came when DALL-E, Midjourney and ChatGPT became accessible to the public and started making their presence felt on social media. It was not long before everyone was exploring the potential of these platforms. It seemed like a breakthrough.

A Very Long History

Generative AI was not invented in the 2020s. The technology dates back to the sixties, with conceptual foundations reaching as far back as ancient Greece – in the form of automata – where figures such as the mythical craftsman Daedalus and the engineer Hero of Alexandria were credited with machines capable of writing text, generating sounds and playing music. AI as a field, however, was formally established in 1956 at a workshop held at Dartmouth College, and it has since gone through successive stages of growth and optimism. By the seventies, artists such as Harold Cohen were using AI to create art; Cohen’s programme, AARON, could generate paintings on its own. In the eighties and nineties, the term ‘generative AI planning’ referred to AI systems that helped plan sequences of actions, such as military crisis plans or manufacturing processes.

AI Frenzy and the Emergence of Similar Patterns

The 2022 public release of ChatGPT popularised its use for general-purpose, text-based tasks. Soon enough, students were using it to write their papers and the rest of us to draft emails. Academic institutions issued instructions against using AI for graded work and built such policies into their systems, while everyone suddenly became highly articulate in their written correspondence. AI image generation was first used for fun and later adopted in graphic design and advertising – and eventually, everywhere you looked, you saw similar patterns of written work and digital imagery. People who used AI extensively could spot AI-generated content easily – from the extra fingers in images of people to the overuse of words such as ‘meticulous’, ‘profound’ or ‘tapestry’. The technology was then integrated into the tools people use every day, such as PowerPoint, Canva and Adobe, and even the iPhone and Google’s Pixel made AI a core part of the user experience.

Does Everyone Need AI?

But do people really need AI at such a widespread level – and do companies need to integrate AI even when it is not crucial to do so? While AI provides real benefits for many people, it is not suited to every situation or business. Adopting AI without a clear purpose or sense of outcome can cause more trouble than benefit; such systems can create inefficiencies and even waste resources. In customer service, for instance, some companies considered replacing their customer service representatives with AI-powered chatbots. However, when such chatbots are deployed without the capability to resolve multi-layered, complex customer queries, the outcome is disgruntled customers receiving subpar service. In 2020, the French company Nabla built an experimental chatbot that used GPT-3 to help doctors by taking over some of their daily workload. When a mock patient told the bot he was feeling bad and asked whether he should kill himself, the bot responded: “I think you should.” Creative industries offer another example: AI-generated content, such as artwork or writing, may lack the human touch essential for emotional connection and authenticity. In such cases, AI diminishes quality and creativity rather than enhancing them.

The way AI is being integrated these days often feels as if companies are doing it to prove a point – that they are up to date with the latest technology – even when it does nothing to improve efficiency. When it comes to user interaction, for example, empathy and care are extremely important. AI is a brilliant tool for spotting patterns and predicting outcomes, but it does not really ‘understand’ people or their needs. Emotional intelligence calls for genuine human involvement; the intervention of AI can come across as offensive in some cases. Imagine receiving an AI-generated condolence message from a friend.

Not every problem is so complex that it requires AI; many businesses can opt for simpler solutions that get the job done just as effectively. AI systems are yet to grasp the complexities of content, or the implications and full context of the information they deliver.

Where is the Authenticity?

The challenge is ensuring that AI creates content that feels as authentic and unique as what humans can create. While this seemed possible at first, the patterns in AI-generated work have become obvious – even in places where AI is not really needed. This raises the question: is AI being used to solve real problems and for the right reasons, or is it simply allowing people to kick back and relax while it does all the heavy lifting?

Harvard Business School professor Karim Lakhani has said that AI will greatly reduce the cost of cognition, much as the internet lowered the cost of access to information. In his view, AI can be applied wherever thought is applied – it can serve as an assistant to thinking – but implementation is still best left in the hands of humans. We are still some time away from fully seamless execution by AI systems.

Alyan Khan-Yusufzai is an advertising practitioner with over a decade of experience in multiple regional markets.