Published 16 Jan, 2025 11:21am

Restrict, Oversee and Keep Experimenting

“I will save time in research.”

“But it will still need to be fact-checked, link to link, word by word.”

“Give it the right prompts.”

“Will that stop it from hallucinating though?”

“How about we use it only to clean the language?”

“Perhaps. We can forget about the team ever learning how to punctuate then.”

“Audio transcribing for those long, long press conferences?”

“I’d like to see it translate ‘khalai makhlooq’ [space creatures] for our audience.”

The above are slightly exaggerated examples of how my co-editors at Dawn.com and I went around in circles while developing a policy on the use of generative AI in our editorial work. At the core of these discussions were two realities: AI cannot be ignored, and the uneasiness about entrusting something as valuable, even sacred, as the news to it is completely valid.

While both these competing realities can be addressed with carefully considered official guidelines, only a handful of news organisations across the globe have taken up the task of developing rules for integrating tools such as ChatGPT and DALL-E, so that their teams can test them safely and cautiously.

A few conundrums may be standing in the way, one of them being that there are not enough people in the industry who have experimented with, observed and understood the capabilities of generative AI, which is still evolving continuously. In Pakistan, where social media strategy is still not treated as separate from editorial strategy in many newsrooms, where does one start?

1. The Need vs the Challenges

Step one is acceptance: acknowledge the need for an AI policy, regardless of the extent of usage. “At the minimum, newsrooms need to be constantly experimenting,” says Shahzeb Ahmed, long-form editor at Dawn.com, who has been carefully integrating AI into his work for more than a year through different design elements. “This is where the content creation ecosystem is headed; any publication or content creator not using AI will be at a significant disadvantage.”

The challenges we had to take into account while developing the policy for Dawn.com were many, but for the sake of this article they can be narrowed down to the three below.

One: AI has limitations, including but not limited to providing false information.

Two: Are we even saving on manpower and time with so much oversight?

Three: Would the use of AI in drafting stories and headlines affect critical thinking, especially among early-career journalists?

Understandably, the prospect of saving time is widely appealing to a strapped industry working around the clock. But at this stage, when AI tends to hallucinate and generate false information, human oversight can end up taking as much time, or even more, to get content into a publish-ready format. This is where a favourite line of marketers comes in handy: you have to spend money to make money. With AI, time needs to be spent now to identify its strengths and limitations, which will be unique to each business model in the media industry, in order to save time later.

Allowing the use of AI becomes trickier when it comes to the creative skills of a team, as newsrooms invest time in developing and relying on journalistic reflexes. For example, a sub-editor who leans on generative AI for headlines may lose the ability to come up with something witty in the seconds after news breaks. Not to mention that digital newsrooms don’t have the luxury of spending time making bot language sound human.

“We don’t see AI as taking over or reducing the role of our team members or negatively impacting their creativity or skills,” weighs in Jahanzaib Haque, chief digital officer and editor at Nukta, a new digital content platform with an AI policy that governs editorial usage. Quoting Nukta’s policy, he says the use of AI is meant to complement human skills. “We want to encourage the maximum amount of experimentation and use possible, and then deal with the questions that arise from such use,” he adds.

2. Use, Don’t Abuse

In wider conversations, one problem I picked up on among those eager to use AI is that they underestimate the audience’s ability to tell the difference between human and AI-generated content. And while readers may indeed not notice in some areas, such as the staple text of explainer videos, the difference shows on platforms where users expect nuance and smart contextualisation. So while every newsroom needs to be experimenting with AI and developing a policy on its use – as limited or wide as that may be – it should also take reader sentiment regarding its brand into account.

Abdul Sattar Abbasi, Dawn.com’s managing editor and resident AI expert, poses three questions for self-evaluation: If everyone can use these tools, what makes your newsroom unique? What is your value proposition? How will you compete when your product looks the same as everyone else’s? As our internal policy took shape, three commitments to the audience were kept in mind.

One: Transparency about the use of AI.

Two: Ensuring human oversight over all aspects.

Three: Taking responsibility for accuracy and fairness.

The next step was to draw up a set of questions for each team member to address: How do you want to use AI? Is this the most efficient way of carrying out the task? Is the material sensitive in nature? After multiple rounds of experimentation and extensive back and forth, generating images with AI was what most digital editors felt comfortable with. However, in our use so far, we have found that even the most minute detail, such as a tiny logo on a faraway silhouette in an AI-generated image, can be problematic. So, at the risk of sounding like a broken record, human oversight again comes into play.

When it comes to text, however, usage is more limited: at the moment, AI at Dawn.com has been okayed for research, data crunching, headline experimentation (senior editors only) and repurposing content for social media (text, audio and visual). A disclaimer will accompany all of these allowed usages. As experimentation continues, on both the business and consumer ends, the policy will be revisited every four months and updated as and when needed.

3. Share Your Policy

We started with a clear goal: there needs to be an official policy. We ended with a clear consensus: restricted use to begin with, extensive human oversight and constant experimentation.

As confusion and insecurity persist over AI’s actual advantages versus its perceived or predicted ones, not acknowledging the new technology will only add to the apprehension. A policy goes a long way in helping the team, readers, contributors and the industry as a whole, especially when many AI tools are easily available and content creators are already using the technology in their work, often arbitrarily. “Our goal is to give people a good way to understand how we can do a little experimentation but also be safe,” says Amanda Barrett, vice president of news standards and inclusion, on the formation of AI policy at the Associated Press. “AI will make mistakes and miss things that a human would find relevant – perhaps so much so that it doesn’t save any time,” cautions Wired in its policy. The New York Times has gone a step further and appointed its first editorial director of Artificial Intelligence Initiatives, who runs a small team.

So, those thinking that not using AI will spare them from having to understand it – or vice versa – are mistaken. As Abbasi sums up: “Know that we are at a strategic inflection point, a Gutenberg printing press moment, where if we (the media industry as a whole) don’t incorporate it, competitors will gain a competitive advantage over you.”

Dawn.com’s AI Policy can be viewed here.

Zahrah Mazhar is Deputy Editor, Dawn.com. mazharzahrah@gmail.com
