AI-based tools have taken us by surprise. Although we have been hearing about AI for years, it always seemed like something from a faraway future. But it is here now.
For the uninitiated, AI is a simulation of human intelligence: through it, a machine learns to perceive and store information like a human, and then act on it.
For example, a tool called Murf uses text-to-speech capabilities to create remarkably natural-sounding human voices. Another one of my favourite examples would be Whisper AI, a life-changing tool for those who might not know a language or are hard of hearing. It doesn’t offer responses; instead, it transcribes speech, deciphering countless accents and jargon.
And then there’s ChatGPT…
What’s the Deal with ChatGPT?
ChatGPT has created quite a furore, and the impact of its release has been monumental! OpenAI – the creator of DALL-E, an AI-based image generation tool – built the software and opened it to the public on November 30, 2022 (although it still isn’t available in Pakistan). Five days later, it had over one million users! People began using it to create marketing content, code, screenplays, etc. And now, you may even get ChatGPT to design and provide a step-by-step blueprint for creating a business. I know, it’s scary!
But since it is software that learns from humans, loopholes are inevitable. Hear me out: although the software is geared to provide solutions to queries, its creators say it doesn’t dabble in politics or display socioeconomic biases. Wrong! What are humans if not defined by their opinions? ChatGPT has shown a pro-environmental stance and a left-leaning orientation when asked to pick a side between two political statements. And since it is trained on pre-existing, human-generated data, it also inherits biases: on a handful of occasions, when asked to describe people, it has alluded to the notion that female scientists and scientists of colour were not as capable as their white, male counterparts.
It is important to be mindful of the limitations and potential negative consequences of AI technology, and to ensure that it is developed and used in a responsible and ethical manner. ChatGPT could reinforce and amplify existing biases in society, resulting in discriminatory outcomes; it could also generate misleading information and manipulate public opinion. The language model collects and stores large amounts of personal data, raising privacy and security concerns. Overreliance may result in the loss of critical thinking and decision-making skills.
There is also a fear that in certain industries it may result in job losses and reduced job security. It may also have unintended consequences that are difficult to predict and control.
In industries that are focused on customer service, AI language models like ChatGPT can be used to improve the speed and efficiency of customer support, reducing response times and improving customer satisfaction.
In marketing and advertising, it can be used to generate personalised and relevant advertisements and recommendations for consumers.
In healthcare, AI can assist in medical diagnosis, treatment planning, and drug discovery, improving healthcare outcomes and accessibility. However, it is not capable of replicating the complex decision-making process and human judgement involved in medical diagnosis, and its outputs should not be used as a substitute for professional medical advice. Reliance on AI for medical diagnosis could lead to incorrect or delayed diagnoses, which could have serious consequences for patients’ health. It is always best to consult with a licensed healthcare provider for any medical concerns.
In finance, these language models can be used to automate financial tasks, such as risk assessment and fraud detection, improving the speed and accuracy of financial processes.
In education, they can assist in teaching and learning, providing personalised and adaptive learning experiences.
In news and media, ChatGPT can be used to generate news articles, summaries and reports, increasing the speed and scale of news production.
ChatGPT can be integrated into government websites to provide citizens with quick and accurate answers to frequently asked questions. It can be trained on government data to identify patterns and trends, giving decision-makers relevant information and insights for policy-making and resource allocation, and it can translate government documents and websites into multiple languages, making government information accessible to a wider audience. However, it is important to note that the use of ChatGPT by the government should be transparent, secure, and in compliance with privacy and data protection regulations.
In the never-ending journey of human progress, the question many are still asking is: is it a friend or a foe? We will just have to wait and see.
Jehan Ara is Founder & CEO, Katalayst Labs.