Discussing AI in the Newsroom

Community Article Published August 23, 2024


I was recently interviewed by Tim Kearny from Techopedia about AI in the Newsroom. Excerpt:

Q (Tim Kearny, Techopedia): As a journalist and the CEO of a news organization, what role do you think AI should play in the newsroom?

A: I’m more of a writer and a product manager than a journalist. HackerNoon publishes all types of blog posts, like op-eds, tutorials, interviews, columns, research papers, and some journalism.

We’re building a community-driven content management system, and there are many places where AI can assist writers, readers, and editors, such as brainstorming new ideas, fixing grammar, or finding your next relevant story.

Within our text editor, we have a custom ChatGPT layer for rewrites and a handful of image generation models, and we use AI to generate summaries sized to each distribution channel’s native character limit.
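For illustration only, here’s a minimal sketch of what that per-channel summary step could look like, assuming an OpenAI-style chat completions API; the channel limits, model name, and prompt are hypothetical, not our actual implementation.

```python
# Hypothetical sketch: generate a summary sized to each channel's
# native character limit. Limits and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed per-channel limits, not a real production configuration.
CHANNEL_LIMITS = {"x": 280, "linkedin": 700, "newsletter": 1200}

def summarize_for_channel(story_text: str, channel: str) -> str:
    limit = CHANNEL_LIMITS[channel]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": f"Summarize the story in at most {limit} characters."},
            {"role": "user", "content": story_text},
        ],
    )
    summary = response.choices[0].message.content.strip()
    # Models don't reliably respect character budgets, so enforce it.
    return summary[:limit]
```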

We use AI to make stories more accessible by producing more versions of each story; for example, we use Google machine learning to translate stories into foreign languages and to generate audio versions of each post.
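As a rough sketch of that translate-and-listen pipeline, here’s what it might look like with Google Cloud’s Translation and Text-to-Speech client libraries; the language codes and voice settings are assumptions, not our production setup.

```python
# Hypothetical sketch: translate a story and synthesize an audio version
# with Google Cloud. Language and voice choices are illustrative only.
from google.cloud import translate_v2 as translate
from google.cloud import texttospeech

def translate_story(text: str, target_language: str = "es") -> str:
    client = translate.Client()
    result = client.translate(text, target_language=target_language)
    return result["translatedText"]

def synthesize_audio(text: str, language_code: str = "es-ES") -> bytes:
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(
            language_code=language_code,
            ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return response.audio_content  # MP3 bytes, ready to store or stream

translated = translate_story("AI can assist writers, readers, and editors.")
audio_mp3 = synthesize_audio(translated)
```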

As a consumer of news, when it comes to the newsroom specifically, I would like journalists to research their stories with whatever advanced and relevant search technology or methodology each story calls for, but never to fully trust the AI, and to always, always verify.

Q: So what level of use is acceptable in your view, and what level of transparency should publishers offer readers?

A: It’s not acceptable for content to be presented as human-made when it was made by AI. Platforms should do what they can to indicate where and how AI contributed to the experience.

For example, we use emoji credibility indicators to tell the reader whether AI assisted in the writing of a story. People on the internet should be able to trust that the author is who the site says the author is.

Q: Do you see AI-generated news sites as a threat to the future of human-written journalism?

A: There are side effects from the mass production and mass consumption of AI-generated content. Deepfakes draw billions of views across social media. Platforms are getting better at detecting and labeling them, but deepfakes are also easier than ever to make.

When TUAW recently relaunched with newly AI-generated content and attributed it to real humans who used to write there, it did not go over well. Writers don’t want that misattribution, and for blog posts, readers trust the content more if they trust the human on the other side of the screen.

Many financial websites and tools have been using natural language processing and automation to push out headlines in seconds, because that information is time-critical for investors. It’s a trade-off of speed and convenience versus slower human input, and it has been going on far longer than the generative AI boom we’re currently seeing.

Read the full AI in the Newsroom discussion on Techopedia.