16

If I use ChatGPT to generate the initial prose for, say, a 2 paragraph introduction (where I modify and rephrase a little) do I need to cite it or somehow give credit?

This article implies "no": "World's largest academic publisher says ChatGPT can't be credited as an author".

Does anyone have specific guidance, perhaps from a publisher?

JRE
CJ Cornell

10 Answers

29

It looks like you can't use it at all, to be honest.

From the OpenAI Terms of Use

You may not: ... (v) represent that output from the Services was human-generated when it is not;...

Given that Springer (from the link in your question) says you can't credit ChatGPT as an author, and OpenAI says you have to be up front that the text was generated by ChatGPT, I'd say you're going to have a hard time satisfying both requirements at once.

The link to the ACL that Franck Dernoncourt posted puts a lot of requirements on the use of ChatGPT for an ACL conference. Basically, the ACL page says "Don't use it. If you must use it, consider all these requirements and how you will make sure they are met before you try to convince us that your use of ChatGPT is merited."

The ACL site mentions some cases where it is OK to use text tools - but it does not put ChatGPT in that category. It also mentions all the ethical and legal doubts surrounding the use of ChatGPT.


Why bother with ChatGPT? You'll have to go through any number of iterations before it tosses out something you like, edit its output, clean it up, and check it for plagiarism (there's always the chance that it'll reconstitute some exact piece of text from the material it was trained on).
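That last check can at least be partially automated. As a toy illustration (not a real plagiarism detector), here is how you might flag the longest verbatim word-for-word overlap between a generated paragraph and a known source using Python's standard `difflib` module; the sample texts are invented:

```python
import difflib

def longest_verbatim_overlap(generated: str, source: str) -> str:
    """Return the longest contiguous run of words that appears
    verbatim in both the generated text and the source text."""
    gen_words = generated.split()
    src_words = source.split()
    matcher = difflib.SequenceMatcher(a=gen_words, b=src_words, autojunk=False)
    match = matcher.find_longest_match(0, len(gen_words), 0, len(src_words))
    return " ".join(gen_words[match.a : match.a + match.size])

generated = "the quick brown fox jumps over the lazy dog today"
source = "yesterday the quick brown fox jumps over a fence"
print(longest_verbatim_overlap(generated, source))
# → the quick brown fox jumps over
```

A long shared run (say, eight or more consecutive words) would be a signal to rewrite that passage or track down and cite its source.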

Using ChatGPT will simply be more work for a questionable gain. Write your text yourself. Then you can be sure that the text says what you meant and that it won't accidentally plagiarise someone else's text.

JRE
19

The article you linked to is rather misleading; in fact, the guidelines directly from Nature say this:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

Therefore, if such tools are used, they should be cited as per the second piece of guidance above when publishing in any of Springer's journals. Other publishers may have different guidelines.


Note: I am deliberately not commenting here on whether I think such use is appropriate or useful, since that is a different question from the one asked here.

9

Academically... no. You cite the work of people; you can't plagiarise an algorithm. You wouldn't cite a piece of software that did a linear regression for you. ChatGPT is essentially the same, just with a few orders of magnitude more statistics behind it.

From a ChatGPT licence perspective, you're not allowed to represent its output as human-made. However, whether you think this licence clause is enforceable is a judgement call for you to make.


There is a slight complication in that ChatGPT has been trained on millions of lines of other people's work. It may, at times, regurgitate someone else's work verbatim, at which point you have accidentally plagiarised that original work. Further to this, there are cases going through the courts at the moment arguing that these models, and any output from them, de facto breach the copyright of any unlicensed training material.

ScottishTapWater
4

do I need to cite or somehow give credit?

No, except if your jurisdiction or your publisher requires it.

From OpenAI FAQ: "Subject to the Content Policy and Terms, you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan."

if anyone has specific guidance - perhaps from a publisher?

E.g. see the ACL 2023 policy: https://2023.aclweb.org/blog/ACL-2023-policy/.

Note that ChatGPT may plagiarize content.

Franck Dernoncourt
3

I am and have been on editorial boards of scientific publications (journals and books). The policy there is quite clear: Texts produced with the help of AI bots are unfit for publication. If not mentioned and found out, authors have plagiarized and will be blacklisted.

Adam Bent
3

Citations, according to the UNC-Chapel Hill University Libraries, serve three major roles in scholarly work: showing how an argument is built on other ideas; indicating which ideas were taken from others and giving due credit; and allowing the reader to track references.

What you would need to do to meet these objectives with LLM output is cite the authors of the text corpus the LLM was trained on, from which it is (in a mechanistic and non-comprehending fashion) taking ideas. As others have noted, you don't cite an algorithm (though one should specify what algorithm and implementation was used in a research paper); but an LLM isn't merely an algorithm, it's tons and tons and tons of training data.

If you borrow an idea from something I post to the web, as long as you cite me, all's well and good. But when an LLM is trained on something I post, then blends and digests and composts it and spits it out for you, it doesn't tell you that it came from me.

The fact that you cannot trace an idea back and cite its source makes such tools unsuitable for writing anything intended to be intellectually rigorous.

Tom Swiss
2

Always, as a rule of thumb, cite your source if it didn't come from you, period. (Purdue University, Purdue Online Writing Lab, College of Liberal Arts.)

F1Krazy
1

What could you cite? The question you posed? The answer varies (by design) each time you repeat the same question. The result reads like human text, but in fact isn't.

GPT is a language model, not a knowledge model. It has zero clue about what's factually right or wrong. That's why its results often sound reasonable to a layman in the field, but hair-raising to an expert, or non-compilable in the case of generated code. GPT will produce output regardless, even if that means hallucinating (inventing content).

So what can you cite?

Jurisdiction will matter here, certainly. So far, personal rights are for persons, not machines.

One option is to add a footnote stating "these paragraphs were created by ChatGPT 3.5 from the question …". However, it's almost never a good idea to just copy the result: readers will notice the variation in style, in the way things are said and expressed.

So I suggest using such results as inspiration for formulating your own thoughts in your own way. Then the citation problem disappears and your text will be more coherent.

MS-SPO
-1

I found this article: Do you need to cite chat gbt?.

It basically outlines that if you use ChatGPT, you should cite it to give transparency and credibility to your work.

-1

I'm no expert academically speaking; I'm still a student. From my point of view, and after reading all the comments, I say yes, you can.

Actually, I'm going to check with my advisor about this tomorrow. Why couldn't we?

The purpose of AI is online source gathering; what I'm doing is asking ChatGPT to sort citations and references. Also, I recommend being specific in your question (you have ownership of the question; it's your own logical thinking process. I consider it like an interview, for example).

In the end, what matters is where and why you are using this information: for your analysis, reference checking, and credibility.

If there isn't a way to credit ChatGPT (as a search engine), well, there should be. AI is out there to make our lives easier and improve our performance. In the end, you still need to do the work in the execution phase.

Toby Speight
Ghina