Artificial-intelligence writing and editing tools can save time and even improve communication. Still, the results need to be carefully checked, as a recent lessons-learned report from one of our sister national laboratories reveals.

Several of their employees prepared papers with the help of AI tools. Diligent editorial review during that laboratory’s formal Scientific and Technical Information release process revealed that the AI tools had inserted references that appeared legitimate but did not correspond to what the authors intended. Further investigation showed that the citations had been fabricated outright by the AI tool. Fortunately, this creative writing was caught and corrected before the papers were sent to publishers.

Understanding the risks

Fabrication

AI tools can make things up. In this case, the fabricated references carried valid Digital Object Identifiers (DOIs), but the works those DOIs resolved to did not match the listed titles, authors, or publications.

This risk can extend to the body of the report. AI tools are trained on material from the open Internet. This material might contain:

  • Biases
  • Inaccuracies
  • Falsehoods
  • Other limitations inherent in the training data

Hallucination

AI tools are known to hallucinate: they produce results that closely mimic legitimate information but are not correct.

How to tame your robot

If you use AI tools (such as “Gemini,” part of our suite of Google apps) to create or improve your writing here at Berkeley Lab, we offer these tips. They reflect our stewardship principles, especially integrity, as well as the traditional values and standards of scholarly publishing.

  • Transparently disclose the fact that you used AI, and for what purposes.
  • Inspect the results carefully. This includes following all references to ensure they exist and make sense for your paper (one way to automate part of that check is sketched after this list). It also includes ensuring factual correctness and attention to nuances of meaning and tone.
  • Have a colleague look over your paper, and follow other review steps customary in your program, before it is released outside the Lab.
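
As a minimal illustration of that reference check, the Python sketch below queries the public Crossref REST API for a citation’s DOI and compares the registered title against the title given in the citation. This is our own sketch, not a Lab-endorsed tool: the function name and matching heuristic are assumptions for illustration, Crossref is not the only DOI registrar (DataCite is another), and a flagged entry should prompt a manual look rather than automatic rejection.

    # Reference sanity check (illustrative sketch): compare a citation's
    # DOI against the metadata registered for that DOI via Crossref.
    # Requires the third-party "requests" package.
    import requests

    CROSSREF_API = "https://api.crossref.org/works/"

    def check_citation(doi: str, cited_title: str) -> str:
        """Look up a DOI in Crossref and report whether the registered
        title plausibly matches the title given in the citation."""
        resp = requests.get(CROSSREF_API + doi, timeout=10)
        if resp.status_code == 404:
            # Crossref is the largest DOI registrar but not the only
            # one, so a miss here means "verify by hand," not "fake."
            return "DOI not found in Crossref; verify manually"
        resp.raise_for_status()
        registered = (resp.json()["message"].get("title") or [""])[0]
        # Crude normalization: enough to flag gross mismatches for a
        # human reviewer, not to pass judgment automatically.
        if cited_title.strip().casefold() in registered.casefold():
            return "Title matches registered metadata"
        return f"MISMATCH: DOI is registered to {registered!r}"

    if __name__ == "__main__":
        # A real, well-known paper used purely as demonstration input.
        print(check_citation(
            "10.1103/PhysRevLett.116.061102",
            "Observation of Gravitational Waves from a Binary Black Hole Merger",
        ))

A check along these lines could have flagged the fabricated citations in the incident above: the DOIs resolved, but the registered titles and authors did not match what the AI had written.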

We are accountable for the integrity of the scientific and technical information that we release.

Taking these measures upholds another of our stewardship principles, trust: in this case, the trust of those who use the information.

All of these, of course, are good ideas whether or not you use AI, but are especially important when using these tools.

This article is based on an All-to-All presentation by ATAP EH&S Coordinator Aaron Potash and on the INL Lessons Learned/Best Practice Briefing “Ensure Responsible Use of Artificial Intelligence Tools in Technical Documentation and Research Outputs.”

Both the article and the presentation were written and reviewed exclusively by actual humans.

For more information on ATAP News articles, contact caw@lbl.gov.