ChatGPT and its artificial intelligence brethren are the latest technological advancements to rumble through local newsrooms.
The so-called chatbot — the GPT stands for “generative pre-trained transformer” — is a free tool that answers users’ questions in a matter of seconds, and it can do some eye-popping things. For instance, the latest system “can figure out tax deductions and answer questions like a Shakespearean pirate,” The Associated Press reported, “but it still ‘hallucinates’ facts and makes reasoning errors.”
Colorado journalists and others have been writing about how these AI chatbots are a college student’s new “favorite helper,” how schools are “bracing for artificial intelligence,” and how “love it or hate it, it’s here to stay.” Meanwhile, Casey Fiesler, a technology ethicist at the University of Colorado Boulder, has been quoted saying people can program generative AI to easily spread disinformation. (ChatGPT, created by the OpenAI research lab, has also entered my financial portfolio; I own stock in Microsoft, which has pumped billions into the tool’s maker, and I invest in ETFs that likely benefit from AI holdings in other capacities.)
As for journalists using the chatbot and AI themselves, some newsroom leaders in Colorado see it as potentially useful. Others don’t seem to want their journalists anywhere near it when it comes to producing work.
“It seems clear to me that there could be uses for it down the road, in conjunction with a journalist who would fact check and direct,” says Larry Ryckman, founder and editor of The Colorado Sun, which has not yet developed a policy around the use of AI in the newsroom. “I think we’re being naive to think that it will never play a role in newsrooms. What I do think is that we have to find a way to use it responsibly and with integrity.”
Others have already drawn a line.
“Our ethics policy calls for our work to be original and our own,” says Kevin Dale, executive editor at Colorado Public Radio. “ChatGPT doesn’t fit that. We’ve talked at a staff meeting [about how] using it to report or produce a story is outside of that.”
Quentin Young, editor of the nonprofit digital news site Colorado Newsline, said its umbrella network States Newsroom “has issued no guidance” on AI chatbots. Nor has Colorado Newsline itself, Young said, “because we expect our journalists to be journalists.”
Management at Rocky Mountain Public Media is drafting policies for how journalists use AI, but for now, “we don’t use ChatGPT for any of our journalism,” says its president and CEO, Amanda Mountain.
Boulder Reporting Lab has started to explore the potential of generative AI to help create “efficiencies and capacity-building in our newsroom,” says founder Stacy Feldman. “We have not used it for editorial production at all — and it’s not a reporting tool,” she said. The nonprofit newsroom has dabbled with using it to create headlines and summarize curated content, though. “But even that requires a layer [of] human fact-checking,” she said. Feldman says she is personally excited about the prospects of the new technology “given how many inefficiencies exist in news production.”
When ChatGPT first hit the media scene in Colorado, some journalists put it to use to explain how this new and startling technology works.
Vince Bzdek, editor of The Colorado Springs Gazette, used ChatGPT to help write a newspaper column. Alayna Alvarez at Axios Denver tested ChatGPT’s Colorado knowledge. Kristen Mohammadi at The Aspen Times asked it questions about Aspen and then judged its results.
Reporter Ernest Luning at Colorado Politics, however, took ChatGPT to an entirely different level. He used it to create a fictitious “artificial intelligence-fueled candidate” named Taylor Brown to run in the crowded Denver mayor’s race. Through a “series of chat sessions,” the fake candidate “acquired a distinctive voice and appeared to grow more emboldened, soon replacing an initially bland manifesto with bold policy solutions that wouldn’t sound out of place if they were released by most of the leading human mayoral hopefuls,” Luning wrote.
Some have wondered how such a tool could impact the profession more broadly. The News Literacy Project recently produced a podcast titled “Will chatbots change how journalism is practiced?” On the show, Madhumita Murgia, the artificial intelligence editor at the Financial Times, said she has seen how it can be a “really great assistive tool, something like an intern” that helps to “draw out information from complex or long documents” and summarize themes and ideas, offering a jumping-off point to “do original reporting that we expect of journalists.”
In Colorado, Wet Mountain Tribune publisher Jordan Hedberg has said he expected Colorado’s chain-owned newspapers to “employ this more to crank out more content as that is their business model.”
On the other hand, John Rodriguez, the former publisher of PULP newsmagazine in Pueblo, said he believes what OpenAI is doing with its chatbot “will kill legacy media but save local media and it will be a good thing.” He argued in a social media essay that “the backbone of local papers could be either a private or public bot that has the entirety of archives, government meetings, laws, etc that allows for higher level requests and production.”
Even before ChatGPT came on the scene, The Denver Post had experimented with robots helping produce sports coverage, and 9NEWS has used a company that creates “automated local stories” to produce items about Denver real estate (and cupcakes). Together, those two outlets command perhaps the largest online audiences in the state. (As far back as 2014, The Los Angeles Times broke the news of an earthquake with help from an algorithm called Quakebot.)
Earlier this year, Gina Chua, the executive editor of the global news organization Semafor, published some ideas about how newsrooms might effectively use ChatGPT.
Some might worry the tool and its AI kin could be a threat to journalism jobs, but Chua found there are “useful, here-and-now real world applications that could materially improve how journalism is practiced and created” — and then immediately added, “the statement above might no longer be true.”
After playing around with a chatbot called Claude, created by Anthropic, Chua wrote:
“I’m not suggesting that Claude should be unleashed on stories unsupervised; but if [it] could do a first edit on most of the copy in a newsroom — especially those where the staff are writing in a language which isn’t their mother tongue — it could offer material improvements in quality and efficiency.”
Some local journalists have copped to using it for a tech assist.
Andrew Kenney, a reporter for Colorado Public Radio, said he has used ChatGPT to “code an Excel macro to do something way beyond my capability” and was happy with the result. He added he wasn’t using it for a published story but rather to keep tabs on certain data in a spreadsheet he created.
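For a sense of what that kind of assist looks like, here is a minimal sketch of the sort of spreadsheet-watching script a chatbot can generate on request. To be clear, this is my own illustration, not Kenney’s actual macro: the file name, column layout, and 10 percent threshold are all hypothetical, and it leans on Python’s openpyxl library rather than Excel’s built-in macro language.

```python
# A minimal sketch (my invention, not Kenney's macro) of the kind of
# spreadsheet chore a chatbot can write code for on request: scan a
# local workbook and flag rows whose value moved past a threshold.
# Assumes a hypothetical file "tracker.xlsx" with labels in column A,
# this week's figure in column B, and last week's in column C.
from openpyxl import load_workbook

wb = load_workbook("tracker.xlsx")
sheet = wb.active

# Skip the header row, then report any row that changed more than 10 percent.
for row in sheet.iter_rows(min_row=2, values_only=True):
    label, current, previous = row[0], row[1], row[2]
    if current and previous and abs(current - previous) / previous > 0.10:
        print(f"{label}: {previous} -> {current}")
```

The point is less the specific code than the division of labor Kenney describes: the journalist specifies the chore in plain English, and the machine handles the syntax.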
As a journalism instructor at Colorado College, I have spoken to students about when and where AI chatbots might be useful and when they might pose conflicts or prove detrimental to their work.
For an upcoming intro class, I plan to offer an in-class activity in which I give students a long, rambling, garbled verbatim quote from a source who witnessed a major vehicle accident and ask them to paraphrase it, using partial quotes, as if it were part of a news story. (Doing this, I’ve found, is not easy for students who are not used to news writing; they tend to over-quote and lack confidence in paraphrasing.) I’ll offer the same prompt to ChatGPT, and then we’ll assess how their work compares to the machine’s.
This week, I asked ChatGPT to flag any inconsistencies with Associated Press style in a few writing samples and was surprised at its ability to spot trouble. The chatbot knew to check that Colorado College’s Kathryn Mohrman Theatre wasn’t spelled as “theater,” said “fifty-two year old” should be “52-year-old,” suggested “alumna” instead of “alumni,” and then spat back a corrected draft in seconds. (I wasn’t as impressed when I tasked it with copy editing the lede of a published AP story.)
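For anyone who wants to run that kind of AP style check as a script rather than in the chat window, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompt wording, and sample draft are my assumptions for illustration, not what I actually fed the chatbot.

```python
# A minimal sketch of scripting an AP-style check through OpenAI's
# Python SDK instead of the chat window. The model choice and prompt
# are assumptions for illustration; requires an OPENAI_API_KEY
# environment variable and the `openai` package installed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up draft seeded with the kinds of errors described above.
draft = (
    "The fifty-two year old alumni spoke Tuesday at the "
    "Kathryn Mohrman Theater, drawing a standing ovation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {
            "role": "system",
            "content": "You are a copy editor. Flag any deviations from "
                       "Associated Press style, then return a corrected draft.",
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)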
Asked if journalists at Colorado Public Radio could do something similar to help copy edit a story draft, editor Dale said, “We cannot use it to produce our journalism at this time.” (I’m not sure if what I asked particularly qualifies.)
Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, has advised that newsrooms “have a duty to report when they’re using these tools,” but also acknowledged that it’s “still a very open question as to how.”
ChatGPT is not the only generative AI tool out there — though it gets much of the attention. The abilities of some similar tools are just wild. I imagine we’ll see policies in newsrooms shift going forward as journalists figure out how to use them.
As one longtime journalist and news manager said as I was reporting this: “Boy, is it moving quickly.”
Corey Hutchins is co-director of Colorado College’s Journalism Institute, reports on the U.S. local media scene for Columbia Journalism Review, and is a journalist for multiple news outlets. Subscribe to his Inside the News newsletter here.