
Our Obsession with AI Continues as ChatGPT Dominates Discussions in Academia and Business

A Bloomberg Tech Daily column today describes the arrival of ChatGPT and its AI impact as “the only place where, today, gravity doesn’t apply.” AI is to new business startups what cryptocurrency and the blockchain were in recent times to finance. Bloomberg notes that companies are baptizing themselves in AI waters these days. If AI isn’t mentioned in a press release, it likely won’t appear on the next day’s business pages or news feeds.

Scientific and Medical Journals Respond to ChatGPT

The kerfuffle over ChatGPT’s use has roiled the waters of science and technology publications. In an editorial appearing on January 31, 2023, the Journal of the American Medical Association (JAMA) and the JAMA Network addressed the implications of non-human authorship and ChatGPT specifically.

In its instructions for authors, JAMA asked for full disclosure whenever a writing-assistance tool was used to create content. (Does that include spell and grammar checkers?)

JAMA asked that information about ChatGPT use appear in the acknowledgements, describing how the content was created or edited. The use of any other language-modelling tool should likewise be disclosed.

Having said that, JAMA’s publisher noted that submissions created using AI should be discouraged. Going beyond ChatGPT, it also discouraged the use of AI tools to produce images appearing in submitted papers. It concluded that AI tools are “transformative, disruptive technologies” that “create promise and opportunities as well as risks and threats for all involved in the scientific enterprise.” Obviously, in JAMA’s estimation the perceived threat outweighed the promise.

The journals Science and Nature have also introduced rules for the use of generative AI, with ChatGPT mostly in mind. Science warns researchers that they could be charged with professional misconduct if they submit manuscripts that use ChatGPT or any other large language AI model.

In its editorial policies, Science states “text generated from AI, machine learning, or similar algorithmic tools, cannot be used in papers published in Science.” It doesn’t even want to consider crediting ChatGPT in authorship. Nature states it will not accept any paper that lists ChatGPT or other AI software as contributors. It goes on to describe the need for full disclosure when using any large language model (LLM) tool. Both Science and Nature describe the use of ChatGPT in works submitted as “plagiarism” noting the need for full accountability based on the principle that all submissions must be human-created original works.

Are Governments Missing in Action on ChatGPT?

Is ChatGPT, like other disruptive technological innovations, outracing the guardrails of regulatory oversight? No government agency or committee in the U.S. or Canada is currently examining the implications of ChatGPT entering the public realm.

In October of last year, the U.S. Office of Science & Technology Policy issued its blueprint for an AI Bill of Rights. OpenAI hadn’t yet revealed ChatGPT to the public. Hence, nothing in this Bill of Rights references large language models. The document acknowledges that unchecked algorithms can be problematic and focuses on:

  • ensuring that AI systems are safe and effective,
  • that their use doesn’t lead to discriminatory acts against people,
  • that the data they parse isn’t used for abusive purposes,
  • that designers fully disclose when AI is being used,
  • and that there are provisions for opting out.

I think that, in light of ChatGPT, the Bill of Rights needs to be updated.

In a very different type of response to the arrival of ChatGPT, an article in the January 26, 2023 issue of The Conversation asked if an outright universal LLM ban makes sense. It posed a question: couldn’t the incorporation of this type of tool enhance the quality of academic papers and journalism? The article suggested that ChatGPT or another LLM could be used to generate acceptable content, helping to organize and explain material more cogently than a human author alone. It went on to state that with the arrival of AI LLMs “there’s no putting the genie back in the bottle,” so why not embrace the technology rather than rejecting it outright? It argued that a tool like ChatGPT “could help democratize the research process.”

Detecting ChatGPT in Authored Works

The Register took a different stance in an article it published this week that posed the question, “Can editors and publishers detect text generated by LLMs?”

Coming up with an AI LLM detector has become the goal of the editors of both Science and Nature, as well as many universities, colleges, and high schools. In the case of the two aforementioned journals, the editors have invited software tool creators to send them their LLM detection software. They shouldn’t have to wait long, because even the creator of ChatGPT, OpenAI, has built an LLM detector that, although not 100% effective, works fairly well. Many others in business and academic circles have joined OpenAI in the effort.
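For readers curious how such detectors work in principle: most score how statistically predictable a passage is under a language model, since machine-generated text tends to be more predictable than human prose. The sketch below is a toy illustration of that idea only, using a simple unigram model with add-one smoothing; it is not OpenAI's method, and real detectors rely on large neural language models.

```python
import math
from collections import Counter

def unigram_perplexity(text: str, reference: str) -> float:
    """Perplexity of `text` under a unigram model fit on `reference`.

    Lower perplexity means the text is more "predictable" given the
    reference corpus -- the statistical signal real LLM detectors
    exploit, though they use far more powerful models than this.
    """
    counts = Counter(reference.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words (add-one smoothing)

    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab)
        log_prob += math.log(p)

    # Perplexity = exp of the average negative log-probability per word.
    return math.exp(-log_prob / max(len(words), 1))
```

A sentence built from words common in the reference corpus scores a lower perplexity than one full of words the model has never seen; a detector would flag suspiciously low scores as possibly machine-generated.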

The editors of Science and Nature remain adamant in their opposition to ChatGPT. They will not accept that authors require an AI to write papers and articles for their publications. Machines have their place. They are useful as tools for formulating hypotheses and designing experiments to test them. Machines can compute large amounts of data and can help parse the results. But without exception, machines shouldn’t become wordsmiths and write the copy. That, the editors say, is the exclusive purview of human authors.

I have only one comment for the editors at Science and Nature. I read many of the papers that appear in your publications, and I only wish their authors could learn to be good writers. I don’t single out these two publications for being at fault, because science and technology journals today seem to be filled with content seriously lacking in quality prose. The writing is jargon-rich, awkward, and sometimes unintelligible. I don’t know what this says about the peer review process, and I often wonder if the reviewers are equally incapable of writing a coherent sentence, let alone a paper.

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology. More...
