The first draft of AI policy

As associations and large publishers advocate for AI regulation and compensation, newsrooms take different approaches to AI adoption

For the Aug. 1, 2023, McKinsey & Company report, “The State of AI in 2023: Generative AI’s Breakout Year,” the firm surveyed international companies across an array of industries, including media.

“According to the survey, few companies seem fully prepared for the widespread use of gen AI — or the business risks these tools may bring,” the report concluded. “Just 21% of respondents reporting AI adoption say their organizations have established policies governing employees’ use of gen AI technologies in their work.”

These are the dilemmas newsrooms face in the age of AI: first, whether and how to use these technologies, and how to build structure, ethics and policy around that use; second, how to protect news publishers’ interests through regulatory and fair-compensation advocacy.

AI advocacy

The News/Media Alliance published “AI Principles” in April 2023, advocating for fair compensation for news content used to train AI technologies, privacy and copyright protections, and transparency. “Publishers have the right to know who copied our content and what they are using it for,” it reads. “We call for strong regulations and policies imposing transparency requirements to the extent necessary for publishers to enforce their rights.”

In late July 2023, WAN-IFRA, the World Association of News Publishers, announced the formation of a special committee to work on AI regulatory actions. Led by journalist and 2021 Nobel Peace Prize Laureate Maria Ressa, the group of 21 represents 13 countries, with members hailing from academia and professional roles in journalism and tech.

“The committee’s role is to develop a set of principles, rights and obligations for information professionals regarding using AI-based systems,” the announcement read.

“In recent months, media groups published guidelines to steer their use of artificial intelligence. However, given the immense economic incentive to exploit AI for productivity or audience share gains, guidelines are needed to ensure all players adopt a cautious and reasoned approach regarding information integrity,” WAN-IFRA’s authors explained.

The committee plans to publish a report by the end of 2023.

In August 2023, press associations and advocates from around the globe published an open letter to international leaders and elected officials, encouraging them to take regulatory action on AI. Among the signatories are Gannett, Getty Images, the National Press Photographers Association, the News/Media Alliance, The Associated Press, the European Publishers Council and the European Pressphoto Agency.

The Online News Association published “A Practical Newsroom Guide to Artificial Intelligence” in July 2023, covering ethics and policy; tips for using AI tools like ChatGPT; and a long list of AI-related tools and resources.

Elite Tozer Truong, the American Press Institute’s vice president of product strategy, authored an August 2023 series called “Local News and AI,” in which she dived into AI ethics and advocacy. “Is it ethical to use AI? This is the first decision you’ll have to make for your news organization. … In the meantime, we have to contend with this uneasiness and consider how we can contribute to getting licensing in place so local newsrooms receive compensation for AI use,” she wrote.

AI in Little Rock

A few journalists in the newsroom at the Arkansas Democrat-Gazette have been experimenting with generative AI tools, but most remain “leery” of AI, according to Managing Editor Alyson Hoge. She cited inaccuracies, unreliability and the extra layer of due diligence needed to verify AI-derived content and ensure it’s being used ethically.

When E&P reached out to Hoge to hear how journalists there were exploring AI, she shared our message with the newsroom — some 100 journalists and editors in all — and asked if and how they’d been using AI. She got surprisingly few responses back; a few said they’d been experimenting with tools like OpenAI’s ChatGPT.

One journalist was testing it for things like creating source lists.

“But no one is actively using it as a matter of routine,” she said.

As Hoge sees it, AI’s primary barriers are bias, inaccuracy and a lack of editorial judgment.

“If I were to ask it, ‘How many towns in Arkansas have a population of less than 500?’ I’m not sure I’d believe it, and I’d want to double-check the information for 100% accuracy,” Hoge said.

Alyson Hoge, managing editor of the Arkansas Democrat-Gazette (Photo by David Hoge)

Counting her combined time at the Arkansas Democrat-Gazette and, before that, the Arkansas Democrat, Hoge has put in 44 years of service. Four decades in news gives her a long-view perspective on technological innovation, resources and cost-cutting, and what it really takes to do good journalism.

Though AI is touted as a time-saving tool, generative AI technologies add another layer of due diligence.

“Even if it were to become widely used, you’d have to have a human come behind it and look very closely over the content and double-check everything. And if you have to do that, what's the point of using it?” Hoge said.

She pondered how a chatbot might write an article about a tornado event without somehow plagiarizing other news coverage.

Though there’s no formal “AI policy” in the works at the Arkansas Democrat-Gazette, Hoge said it’s not a bad idea to start discussing it. She said it’s wise for journalists to be cautious and skeptical about AI but acknowledged the fear of being too careful and “left behind.”

“Because just like the internet and with social media, you can end up watching everybody else in the industry jump right into it, charge right ahead, while you risk being left behind,” she said.

In USA TODAY Network newsrooms

At Gannett’s USA TODAY Network newsrooms around the country, AI is already impacting how news is gathered and delivered, and journalists are being trained on how to ethically use AI tools.

Jessica Davis, Gannett senior director, news automation and AI product

“Copyright hazards are one of the many concerns that must be weighed when using generative AI, especially for open-source models,” according to Jessica Davis, Gannett senior director, news automation and AI product.

“It’s important to acknowledge the uncertainty in our industry around AI. At Gannett, AI will not replace journalism or journalists,” Davis remarked. “AI will help journalists and journalism by improving efficiencies for the journalist and personalization for customers.

 “These are tools to increase efficiency and relieve reporters from the more tedious and monotonous work and allow them to focus on generating more content,” Davis continued. “Gannett journalists will have the final say on what is published using generative AI.”

One way they’ll use AI is to summarize stories and extract bullet points from reporting, publishing them at the top of a story. Editors will have the final say on whether the AI-generated output is published or needs revision, and the plan is to go live in Q4 2023, Davis told E&P.
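Neither Davis nor Gannett has shared implementation details, but the workflow she describes — generate bullet points with a model, hold them for editor approval, publish only what’s cleared — can be sketched simply. The following is a minimal, hypothetical sketch assuming the OpenAI Python SDK; the model name, prompt and ReviewQueue helper are illustrative assumptions, not Gannett’s actual system.

```python
# Hypothetical sketch of a summarize-then-review workflow (not Gannett's
# actual system). Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# set in the environment.
from dataclasses import dataclass, field

from openai import OpenAI

client = OpenAI()


@dataclass
class ReviewQueue:
    """Holds AI-generated summaries until a human editor approves them."""
    pending: list = field(default_factory=list)

    def submit(self, story_id: str, bullets: list) -> None:
        self.pending.append({"story_id": story_id, "bullets": bullets})


def summarize_story(body: str) -> list:
    """Ask the model for 3-5 bullet points; editors still have final say."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this article in 3-5 neutral, factual bullet points."},
            {"role": "user", "content": body},
        ],
    )
    text = response.choices[0].message.content
    # One bullet per line; strip any list markers the model adds.
    return [line.lstrip("-• ").strip() for line in text.splitlines() if line.strip()]


queue = ReviewQueue()
queue.submit("story-123", summarize_story("Full article text goes here..."))
# An editor reviews queue.pending before anything reaches the top of a story.
```

The key design point matches Davis’s description: the model only proposes; nothing in the queue is published until a human clears it.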

She also noted, “Gannett plans to eventually incorporate that summarization technology into its publishing system.”

In an April 2023 update, Gannett added a section on AI to its “USA TODAY NETWORK Principles of Ethical Conduct For Newsrooms.” It opens with a statement of purpose about why AI policy is necessary: “AI (Artificial Intelligence) is emerging as a helpful tool in publishing. However, before using any AI-generated content, you must discuss the purpose and how it was produced with your editor. This policy aims to provide ethical guidelines for journalists using AI-generated content: whether written, visual or audio, to ensure that their reporting is transparent, accurate, fair and accountable.”

The AI principles include standard operating procedures for journalists — verifying the accuracy of information, being critical of it and exercising sound editorial judgment about its use. Gannett journalists are asked to be cognizant of AI’s technical limitations, ethical pitfalls and larger liabilities.

Some of the guidelines are a bit more complicated: “Journalists must ensure that the use of AI-generated content does not violate the privacy rights of individuals. They must ensure that the data used to generate content is collected and used in compliance with data protection laws.”

Gannett’s principles also stipulate, “Journalists must take responsibility for any errors or inaccuracies in the AI-generated content they use. They must be accountable to their audience and take corrective action if errors are found.”

“AI use in the newsroom is not new,” Davis noted. “Gannett has been using Natural Language Generation, a subset of AI, for over five years, to aid journalists in stories, like weekly unemployment claims and real estate trends. Audience response has been positive because these types of stories are useful in helping our readers better understand what is happening in their communities.”
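The Natural Language Generation Davis describes — turning recurring datasets into short data stories — is the most established form of newsroom AI, and a template-based version is easy to illustrate. The sketch below is a generic, hypothetical example in the spirit of a weekly unemployment-claims piece, not Gannett’s implementation; the field names and wording are assumptions.

```python
# Hypothetical template-based Natural Language Generation for a recurring
# data story (illustrative only; not Gannett's actual system).

def unemployment_blurb(state: str, week: str, claims: int, prior: int) -> str:
    """Turn one week of claims data into a publishable sentence."""
    change = claims - prior
    if change > 0:
        movement = f"rose by {change:,} to {claims:,}"
    elif change < 0:
        movement = f"fell by {abs(change):,} to {claims:,}"
    else:
        movement = f"held steady at {claims:,}"
    return (f"Initial unemployment claims in {state} {movement} "
            f"for the week ending {week}.")

print(unemployment_blurb("Arkansas", "Aug. 5, 2023", 2150, 2410))
# Initial unemployment claims in Arkansas fell by 260 to 2,150 for the
# week ending Aug. 5, 2023.
```

The appeal of this approach is that every number comes straight from the data, so there is nothing for a model to hallucinate; the tradeoff is that the template can only say what its author anticipated.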

Gretchen A. Peck is a contributing editor to Editor & Publisher. She’s reported for E&P since 2010 and welcomes comments at gretchenapeck@gmail.com.
