A machine learning conference debating the use of machine learning? While that might seem meta, in its call for paper submissions on Monday, the International Conference on Machine Learning did, indeed, note that “papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”
It didn’t take long for a brisk social media debate to brew, in what may be a perfect example of what businesses, organizations and institutions of all shapes and sizes, across verticals, will have to grapple with going forward: How will humans deal with the rise of large language models that can help communicate — or borrow, or expand on, or plagiarize, depending on your point of view — ideas?
Arguments for and against the use of ChatGPT
As a Twitter debate grew louder over the past two days, a variety of arguments for and against the use of LLMs in ML paper submissions emerged.
“So medium and small-scale language models are fine, right?” tweeted Yann LeCun, chief AI scientist at Meta, adding “I’m just asking because, you know… spell checkers and predictive keyboards are language models.”
Sebastian Bubeck, who leads the ML Foundations team at Microsoft Research, called the rule “shortsighted,” tweeting that “ChatGPT and variants are part of the future. Banning is definitely not the answer.”
And Ethan Perez, a researcher at Anthropic AI, tweeted that “This rule disproportionately impacts my collaborators who are not native English speakers.”
Silvia Sellan, a University of Toronto Computer Graphics and Geometry Processing PhD candidate, agreed, tweeting: “Trying to give the conference chairs the benefit of the doubt but I truly do not understand this blanket ban. As I understand it, LLMs, like Photoshop or GitHub copilot, is a tool that can have both legitimate (e.g., I use it as a non-native English speaker) and nefarious uses…”
ICML conference responds to LLM ethics rule
Finally, yesterday the ICML clarified its LLM ethics policy:
“We (Program Chairs) have included the following statement in the Call for Papers for ICML 2023:
Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.
This statement has raised a number of questions from potential authors and led some to proactively reach out to us. We appreciate your feedback and comments and would like to clarify further the intention behind this statement and how we plan to implement this policy for ICML 2023.
● The Large Language Model (LLM) policy for ICML 2023 prohibits text produced entirely by LLMs (i.e., “generated”). This does not prohibit authors from using LLMs for editing or polishing author-written text.
● The LLM policy is largely predicated on the principle of being conservative with respect to guarding against potential issues of using LLMs, including plagiarism.
● The LLM policy applies to ICML 2023. We expect this policy may evolve in future conferences as we understand LLMs and their impacts on scientific publishing better.”
The rapid progress of LLMs such as ChatGPT, the statement said, “often comes with unanticipated consequences as well as unanswered questions,” including whether generated text is considered novel or derivative as well as issues around ownership.
“It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted,” the statement said. “However, we do not yet have any clear answers to any of these questions.”
What about use of ChatGPT attribution?
Margaret Mitchell, chief ethics scientist at Hugging Face, agreed that there is a primary concern around plagiarism, but suggested putting that argument aside, as “what counts as plagiarism” deserves “its own dedicated discussion.”
However, she rejected arguments that ChatGPT is not an author, but a tool.
“With much grumpiness, I believe this is a false dichotomy (they are not mutually exclusive: can be both) and seems to me intentionally feigned confusion to misrepresent the fact that it’s a tool composed of authored content by authors,” she told VentureBeat by email.
Moving on from the arguments, she believes using LLM tools with attribution could address ICML concerns.
“To your point about these systems helping with writing by non-native speakers, there are very good reasons to do the opposite of what ICML is doing: Advocating for the use of these tools to support equality and equity across researchers with different writing abilities and styles,” she explained.
“Given that we do have some norms around recognizing contributions from specific people already established, it’s not too difficult to extend these norms to systems derived from many people,” she continued. “A tool such as ChatGPT could be listed as something like an author or an acknowledged peer.”
The fundamental difference with attributing ChatGPT (and similar) is that at this point, unique people cannot be recognized — only the system can be attributed. “So it makes sense to develop strategies for attribution that take this into account,” she said. “ChatGPT and similar models don’t have to be a listed author in the traditional sense. Their authorship attribution could be (e.g.) a footnote on the main page (similar to notes on affiliations), or a dedicated, new kind of byline, or <etc>.”
Grappling with an LLM-powered future
Ultimately, said Mitchell, the ML community need not be held back by the traditional view of authors.
“The world is our oyster in how we recognize and attribute these new tools,” she said.
Will that be true as other non-ML organizations and institutions begin to grapple with these same issues?
Hmm. I think it’s time for popcorn (munch munch).