Inproceedings

Do LLMs Plan Like Human Writers? Comparing Journalist Coverage of Press Releases with LLMs

Authors

Alexander Spangher, Nanyun Peng, Sebastian Gehrmann, Mark Dredze

Book Title

Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Year

2024

Pages

21814–21828

Publisher

Association for Computational Linguistics

DOI

10.18653/v1/2024.emnlp-main.1216

Abstract

Journalists engage in multiple steps in the news writing process that depend on human creativity, like exploring different "angles" (i.e. the specific perspectives a reporter takes). These can potentially be aided by large language models (LLMs). By affecting planning decisions, such interventions can have an outsize impact on creative output. We advocate a careful approach to evaluating these interventions to ensure alignment with human values. In a case study of journalistic coverage of press releases, we assemble a large dataset of 250k press releases and 650k articles covering them. We develop methods to identify news articles that challenge and contextualize press releases. Finally, we evaluate suggestions made by LLMs for these articles and compare these with decisions made by human journalists. Our findings are three-fold: (1) Human-written news articles that challenge and contextualize press releases more take more creative angles and use more informational sources. (2) LLMs align better with humans when recommending angles, compared with informational sources. (3) Both the angles and sources LLMs suggest are significantly less creative than humans.

BibTeX Citation

@inproceedings{spangher-etal-2024-llms,
    title = "Do {LLM}s Plan Like Human Writers? Comparing Journalist Coverage of Press Releases with {LLM}s",
    author = "Spangher, Alexander  and
      Peng, Nanyun  and
      Gehrmann, Sebastian  and
      Dredze, Mark",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.1216/",
    doi = "10.18653/v1/2024.emnlp-main.1216",
    pages = "21814--21828",
    abstract = "Journalists engage in multiple steps in the news writing process that depend on human creativity, like exploring different ``angles'' (i.e. the specific perspectives a reporter takes). These can potentially be aided by large language models (LLMs). By affecting planning decisions, such interventions can have an outsize impact on creative output. We advocate a careful approach to evaluating these interventions to ensure alignment with human values.In a case study of journalistic coverage of press releases, we assemble a large dataset of 250k press releases and 650k articles covering them. We develop methods to identify news articles that {\_}challenge and contextualize{\_} press releases. Finally, we evaluate suggestions made by LLMs for these articles and compare these with decisions made by human journalists. Our findings are three-fold: (1) Human-written news articles that challenge and contextualize press releases more take more creative angles and use more informational sources. (2) LLMs align better with humans when recommending angles, compared with informational sources. (3) Both the angles and sources LLMs suggest are significantly less creative than humans."
}