Published 05 Mar, 2024 10:34am

AI and the Yellow School Bus

I love school buses. I find their existence profound and their journey magical – especially the yellow ones found all over the United States. Organised student transport predates what we see today, but did you know that the yellow school bus was not introduced until the 20th century? The development of the modern-day school bus is attributed to Frank W. Cyr, a professor at Teachers College, Columbia University. In 1939, Cyr organised a conference that established design and safety standards for school buses. This led to the standardisation of the iconic yellow colour as well as their size and design in the United States, and eventually in most parts of the modern world.

Like any vehicle, school buses have been involved in accidents resulting in injuries and fatalities – and this despite being among the most regulated vehicles on the road. School districts, transportation departments, law enforcement agencies and regulatory authorities all prioritise safety protocols, maintenance inspections, background checks for drivers, and stringent regulations to ensure the safe operation of these school buses. Then, of course, we have the moving time-bomb pickup wagons that transport students to school in countries like Pakistan; despite several tragic incidents resulting in the loss of life, we still see these wagons all around us.

I also love AI. What a magical and profound presence. And like the school bus, it transports our future to a place of learning. Yet, unlike the school bus, which carries a fixed number of children and is still driven by someone with a heart and soul, AI is a self-driving car with a mind of its own; one that learns at the speed of light and is taking us to an unknown destination. This raises the question… who is driving?

Everyone and no one! This answer scares me. Tech giants, including Google, Microsoft, IBM, and OpenAI, have developed ‘ethical’ AI principles, guidelines and frameworks aimed at addressing responsible AI development, deployment, governance, transparency, accountability and societal impact. Governments and regulatory bodies worldwide are increasingly focusing on AI governance, ethics, laws, regulations, compliance and accountability to address the ethical, legal, social, economic and security implications of AI technologies. Various countries and international organisations, such as the EU, have proposed policy frameworks and initiatives on AI ethics, governance, human rights, data protection, security and trust. Yet, here I am, petrified, thinking this is not enough. The world is more divided today than ever before and it doesn’t take a genius to figure out that our present world leaders are incapable of achieving anything significantly ‘good’ by collaborating.

Ask yourself this: how did you feel when X (formerly known as Twitter) or Meta flagged one of your posts for violating their policy because it was content made in solidarity with the Palestinians? I am not taking a political stance here; just making a point. Three years ago, Adam Bensaid’s TRT World report, titled Workplace and Algorithm Bias Kill Palestine Content On Facebook and Twitter, shed light on events that signify a longstanding and institutionally accepted challenge to the values of free speech. Although some may contend that the issue is not acceptance but rather the unchecked growth of Big Tech, the implications of social media companies neglecting what is now recognised as algorithmic bias, and maintaining a silent stance despite receiving complaints, are alarming. And these tech giants are the ones ‘regulating’ AI governance, fairness, transparency and accountability.

As Dr Nekhorvich from Mission: Impossible II put it: “Every search for a hero must begin with something that every hero requires: a villain.” Let’s assume AI is both the Chimera and the Bellerophon of our time. We need a public-private initiative that engages civil society stakeholders, AI experts, lawmakers and regulatory bodies through consultation, dialogue, forums and other participatory processes to gather insights, values and recommendations on AI ethics and governance. And we needed that to happen yesterday.

The future is already here, but will we ever align on a comprehensive, global moral code for AI? I don’t think so, but we have to keep trying. This can only happen through a collaborative, multidisciplinary, multi-stakeholder, and global effort, aligned with human values, rights and societal priorities.

Public trust is key. We need to invest in education and awareness about AI and its capabilities and risks. The velocity of AI’s growth is unlike anything we have witnessed before. We need continuous engagement with key stakeholders to facilitate a responsive governance framework based on evolving technologies and social needs. And we need to do this without stifling technological advancements.

This is why I love the yellow school bus. It transports budding creative geniuses while mirroring the structured, regulated world we know. And I want to ride that bus on this unpredictable highway as we explore the realms of AI. Accidents may still happen, but at least we would be travelling with headlights and a compass – a comprehensive, global moral code. The urgency is as intense as the journey is profound. It calls for collaboration, awareness and the pursuit of equilibrium. In the words of Cyberdyne Systems Model 101 (or the T-800): “Come with me if you want to live!”

Umair Saeed is COO, Blitz Advertising. umair.saeed@blitz.pk

