Summarizing complex scientific findings for a non-expert audience is one of the core day-to-day tasks of a science journalist. Summarizing complex writing has also frequently been cited as one of the strongest use cases for large language models (despite some prominent counterexamples).

With all that in mind, the team at the American Association for the Advancement of Science (AAAS) ran an informal year-long study to determine whether ChatGPT could produce the kind of “news brief” paper summaries that its “SciPak” team routinely writes for the journal Science and services like EurekAlert. These SciPak articles are designed to follow a specific and simplified format that conveys crucial information, such as the study’s premise, methods, and context, to other journalists who might want to write about it.

Now, in a new blog post and white paper discussing their findings, the AAAS journalists have concluded that ChatGPT can “passably emulate the structure of a SciPak-style brief,” but with prose that “tended to sacrifice accuracy for simplicity” and which “required rigorous fact-checking by SciPak writers.”
