Published 05 Jul, 2023 03:16pm

The threat of maliciously directed AI through human agency exists today

AURORA: How far advanced is Pakistan in terms of the application of AI?

JAWWAD AHMED FARID: In terms of practical application, AI has multiple areas. Predictive capability is one; an example would be: what will the price of oil be tomorrow? Or: is this the right word to use next? ChatGPT, in the simplest of terms, is nothing more than what I would call a word guesser.

A: A word guesser?

JAF: Yes, it guesses the next word. Of course, it has a huge data set, including all the material that is on the internet in various shapes and forms. At the simplest level, ChatGPT has an internal model that compresses all this knowledge into structures – and each structure says: if you have A, B, C, D, E and F as the words so far, what is the best guess for the next one? Answer: the highest probability is G. Based on what it has seen in the past, it looks up the relevant context, gives you the next word and repeats the process. Someone fed it The Great Gatsby and asked it to write the next chapter in the same style as the author – and it did, basically by looking at different writing styles. Ultimately, the essence of most AI models is that there is a world (our world) that exists and that the model wants to represent, and then there is an internal model – the machine or the system or the technology – and using this internal model, it extrapolates. This is what ChatGPT and most AI models do. They look at this world, represented sometimes by equations, by data or by numbers, and they build an internal model. Sometimes we understand how that internal model is built because we understand the mechanics and the logic of it – it is an explainable internal model. But sometimes we don’t understand the internal model and we call this a black box. In terms of predictive application, I know of two companies in Pakistan that have done this. One is TRG, a technology company listed on the Pakistan Stock Exchange. They work in the call centre space and have developed software that matches their customers with their agents – the end objective is to run a number of different scenarios that ultimately help them determine which agent to use for which customer.
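
To make the “word guesser” idea concrete, here is a deliberately toy sketch in Python: it counts which word most often follows a two-word context in a tiny made-up corpus and returns the highest-probability continuation. The corpus, the context length and the frequency-table approach are all illustrative assumptions – ChatGPT’s actual internal model is a large neural network, not a lookup table.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "all the stuff on the internet".
corpus = ("the price of oil went up the price of gold went down "
          "the price of oil fell").split()

# Count which word follows each two-word context.
context_counts = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    context_counts[(a, b)][nxt] += 1

def guess_next(a, b):
    """Return the most frequent next word after the context (a, b) and its probability."""
    counts = context_counts.get((a, b))
    if not counts:
        return None
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

print(guess_next("price", "of"))  # ('oil', 0.666...) -- the highest-probability guess
```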

A: So it basically addresses a pain point?

JAF: Yes. What it is saying is: given the profile of this agent and the profile of this customer – given these two parameters – can I optimise my conversion rate? The other company is Engro Digital (I think they have since been acquired by another local company). They run a large industrial plant that throws off a lot of data with respect to floor capacity, temperature, output, operating conditions, challenges, red flags, sensors and so on. They built an AI model – think of it as an operating system for large manufacturing plants – based on historical patterns and results, which they apply to their current or new data, with the end objective of determining what action needs to be taken.
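
As a rough illustration of the matching idea described above – and emphatically not TRG’s actual software – the sketch below estimates a conversion rate for each (agent, customer segment) pair from invented historical outcomes and routes a new customer to the agent with the highest estimated rate.

```python
from collections import defaultdict

# Invented historical records: (agent_id, customer_segment, converted?)
history = [
    ("agent_a", "enterprise", True), ("agent_a", "enterprise", False),
    ("agent_a", "retail", True),     ("agent_b", "enterprise", True),
    ("agent_b", "enterprise", True), ("agent_b", "retail", False),
]

totals = defaultdict(lambda: [0, 0])  # (agent, segment) -> [conversions, attempts]
for agent, segment, converted in history:
    totals[(agent, segment)][0] += int(converted)
    totals[(agent, segment)][1] += 1

def best_agent_for(segment):
    """Route a customer segment to the agent with the highest observed conversion rate."""
    rates = {a: c / n for (a, s), (c, n) in totals.items() if s == segment}
    return max(rates, key=rates.get)

print(best_agent_for("enterprise"))  # agent_b (2/2 converted vs agent_a's 1/2)
```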

A: How many companies in Pakistan are doing this sort of thing?  

JAF: This kind of modelling can be used across multiple areas – and I am going to say something that is controversial – a large part of what we call AI today is just maths.

A: Just maths, but beyond human capacity to calculate in the same timeframe? 

JAF: It is beyond human capacity in the sense that it is brute-force maths.  

A: Brute force? 

JAF: It means I have thrown a lot of resources at it. It is often said that given enough monkeys and enough typewriters, one can eventually reproduce the complete works of William Shakespeare. So with brute-force maths – if I throw enough hardware, memory, processing power and data at a problem, I will ultimately figure out a way to solve it. The interesting part of AI is that there are two sets of problems. Solvable problems are those whose space, from a computational perspective, is finite; if I have enough processing power, enough CPU, memory and resources, I can walk through that space and find a solution that works. Then there are the not solvable problems: I can throw an infinite amount of computing power at them and they still cannot be solved, because their space is too large.
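
A minimal sketch of what “walking through a finite space” looks like in practice, with an invented check function standing in for a real problem: every one of the 10,000 possible four-digit codes is tried until one satisfies the condition.

```python
from itertools import product

def is_correct(code):
    # Invented stand-in for whatever condition defines a "solution".
    return code == (4, 2, 7, 1)

# 10^4 = 10,000 candidates: a finite space small enough to walk through completely.
for candidate in product(range(10), repeat=4):
    if is_correct(candidate):
        print("solved:", candidate)
        break
```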

A: The space is too large?

JAF: In a game of chess, the number of moves you can look ahead with current computational power is 15 to 20 at most. If you can look 15 moves ahead, you are playing chess at the level of a grandmaster; if you can only look three moves ahead, you are playing at the level of a newbie. Beyond about 21 moves, the space grows at an exponential rate; there are so many pieces, options and combinations that you cannot possibly work through all the moves to the end of the game. This is a not solvable problem.
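
To give a sense of the scale involved: chess positions are commonly cited as having roughly 35 legal moves on average, so the number of move sequences grows exponentially with look-ahead depth. The branching factor below is an assumption used only for this back-of-the-envelope calculation.

```python
BRANCHING = 35  # assumed average number of legal moves per chess position

for depth in (3, 10, 15, 21):
    print(f"looking {depth:>2} moves ahead: ~{BRANCHING ** depth:.2e} move sequences")
# looking  3 moves ahead: ~4.29e+04 move sequences
# looking 10 moves ahead: ~2.76e+15 move sequences
# looking 15 moves ahead: ~1.45e+23 move sequences
# looking 21 moves ahead: ~2.66e+32 move sequences
```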

A: But could AI eventually develop the capacity to solve a not solvable problem?

JAF: No, it is just maths.

A: So what determines whether a problem is solvable or not solvable comes from the capacity of the human mind? 

JAF: Partly from the human mind and partly from how the problem is stated and structured. If a problem is defined in a solvable state, you can solve it. In simplistic terms, what I am saying is that for certain problems there is a mathematical model or equation – and when we say that a problem has been solved by AI, what we find is an approximation to that mathematical model, because you are able to express the problem in a form that can be solved. But there are problems where either the problem formulation is not clear or the mathematical model does not exist. In 1997, a programme called Deep Blue was developed to beat Garry Kasparov at chess, and the way Deep Blue beat him was by mixing and matching five or six different approaches to the solvable problem. It was this combination, or hybrid approach, that made it possible for Kasparov to be beaten by a computer. The way it works is step by step. There is a problem and then there is AI, which tries to solve that problem. The missing piece in the background – and it is not always available – is that there has to be some level of prompting that allows the AI to get to the solution.
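
For a flavour of the step-by-step, depth-limited game-tree search that chess engines such as Deep Blue built on (Deep Blue combined it with specialised hardware, opening books and hand-tuned evaluation), here is a minimal minimax sketch. The toy game, move generator and evaluation function are all invented for illustration.

```python
def minimax(state, depth, maximizing, moves, evaluate, apply_move):
    """Search `depth` plies ahead and return the best score the side to move can force."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, evaluate, apply_move) for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy game: the state is a running total, each move adds 1, 2 or 3, and the
# (invented) evaluation rewards an odd total at the search horizon.
print(minimax(0, depth=3, maximizing=True,
              moves=lambda s: [1, 2, 3] if s < 10 else [],
              evaluate=lambda s: s % 2,
              apply_move=lambda s, m: s + m))
```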

A: But can’t AI eventually develop its own prompts?

JAF: Maybe at some point in time, but not yet.

A: Recently several AI labs have been set up in Pakistan. What is their role?

JAF: This is an area I call optimisation and classification. Take diabetic retinopathy as an example. The old diagnostic process required referral to a hospital. A camera would then take a photo of the eye, after which the patient had to wait several days for the results. About 11 years ago, a team of students at the National University of Sciences & Technology (NUST) wrote software that would take an image of the eye and make the diagnosis within 30 to 60 minutes. They did not roll this out immediately. They tested it on a lot of images with a lot of ophthalmologists and radiologists, and the error rate turned out to be less than one percent – significantly lower than the error rate of a radiologist. They used technology to process an image and then detect the occurrence of a disease at a higher level of accuracy and at a faster processing rate than a human could manage.
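
A hypothetical sketch of this kind of classification pipeline is shown below: train a classifier on labelled images and measure its error rate on held-out cases. Random arrays stand in for real retinal photographs, and nothing here represents the NUST team’s actual software.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, pixels = 200, 32 * 32
X = rng.random((n_images, pixels))      # stand-in for flattened retinal images
y = rng.integers(0, 2, size=n_images)   # stand-in labels: 1 = retinopathy present

# Hold out a quarter of the images to estimate the error rate, conceptually the
# same check the team ran against ophthalmologists and radiologists.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```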

A: Who determines the research priorities in Pakistan?

JAF: Research priorities are essentially set by the people hired to supervise students – professors with specialised skillsets and expertise in the areas they work in. They either create a lab or have research students work on problems that interest them, so to a large extent the research priority and mandate of AI labs is set by the capacity of the professors able to assist students in conducting new research. Often there are issues concerning data. In the example of diabetic retinopathy, there had to be patient data available in order to train the model, and a flow of patients to test the results; without data, you cannot train a model. AI has multiple fields and applications – and vision and image processing is one of them. There is a lot of interest in this space because it essentially maps into vision and vision is a hot area these days.

A: Why is it a hot area?

JAF: Self-driving cars. There is no driver, so the car has to be able to make sense of the world around it. Vision is about sensors. A lot of interesting stuff has happened in this area in the last couple of years because a lot of funding is going into it. We talked earlier about forecasting and prediction, which are linked to mathematical modelling. Then there are deep learning and machine learning, which are also based on mathematical modelling. Then you have natural language processing (NLP), which is really interesting because it helps solve the accessibility problem.

A: Meaning?

JAF: Let’s say you have a huge data set from which you want to extract some answers. These answers can be expressed either in computer language or in natural language. NLP is much easier. I don’t have to remember SQL syntax; I don’t even have to understand the database. I just have to input the data and ask it to give me the answers. So accessibility – and this is why NLP is hot. For NLP, the potential use case is six billion people at some point in time.
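
The accessibility point can be seen by comparing two routes to the same answer below: the SQL route requires knowing the schema and syntax, while the natural-language route hands the question to an NLP layer. `ask_in_plain_language` is a made-up stand-in, not a real library call.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 50.0)])

# Route 1 -- SQL: you must know the table, the columns and the syntax.
total_north = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'north'").fetchone()[0]
print("SQL answer:", total_north)

# Route 2 -- NLP (sketch only): the user asks in English and the system is
# expected to translate the question into the query above.
def ask_in_plain_language(question: str) -> str:
    return "SELECT SUM(amount) FROM sales WHERE region = 'north'"  # hypothetical translation

print("NLP-generated query:", ask_in_plain_language("What were total sales in the north?"))
```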

A: Where does Pakistan stand compared to the rest of the world in terms of AI research?

JAF: We are not where India or even Sri Lanka is.

A: Any reason for this?

JAF: They have better schools, bigger budgets, more focus, more PhDs. It is a numbers game and we don’t have enough numbers. Hardcore computer science is a function of teaching capacity and not everyone has the background. However, compared to 20 years ago, we have more PhDs with a background in AI, robotics, image processing, machine learning and data science. CS is about applied mathematics and, in a given population set, only a handful of people are gifted in maths; only a few of them have the aptitude for computer science, and of these few, not all go into computer science. Another challenge is that to increase the level of research we need more PhDs in computer science, and in Pakistan, a lot of the intake we get stops at the Bachelor’s level.

A: Why?

JAF: They start working. The expectation from their family or from themselves is that having paid for an education, it is time to start working – and not all of them come back for a Master’s programme, and if they do, they opt for business administration or something else. If we want more PhDs in computer science, we need more students doing their Master’s and therefore more funding. 

A: Does the government have a role in facilitating AI-based research?

JAF: My pet peeve is the cost of technology. Duties on hardware have to be reduced otherwise we cannot do technology. We also need more funding for Master’s programmes in Pakistan and PhDs outside Pakistan. The rest we can figure out ourselves.

A: Do academia, industry and government talk to each other about these issues?

JAF: There are some conversations. From an industry perspective, they are mostly agenda-driven – we need lower taxation, for example – rather than guided by a clear plan. From the academic perspective, they are driven by policy engagement.

A: Potentially how dangerous is AI to humankind?

JAF: I would say that for the next 15 to 20 years there is no threat in terms of a self-aware AI causing us harm. But the threat of maliciously directed AI through human agency exists today. 

A: Ten to 15 years sounds pretty imminent.

JAF: The people making a fuss about the dangers of AI have strong vested commercial interests in developing AI themselves. Elon Musk wants to put a stop to AI development because he wants to own it. He funded OpenAI, and then he said AI would kill humanity and he got his friends and colleagues to say AI development should be put on hold. Then, when Sam Altman, the co-founder of OpenAI, refused to do so, Musk set up his own AI foundation; what does this tell you? The debate over whether hard intelligence, or singularity, is possible has been going on for about 50 years and, mathematically speaking, people much more qualified than I am believe that it is not.

A: Yet, you are also saying it may be possible in 10 to 15 years.

JAF: We have different versions of our future world: part of it is fantasy, part of it is romanticism and part of it is reality. The challenge is that we cannot differentiate between the three. The realistic part of me says that, given the direction and all the possible paths open at this point in time, in the next 20 years there could be one path that hits self-aware singularity.

Jawwad Ahmed Farid was in conversation with Mariam Ali Baig.
aurora@dawn.com for feedback.
