
ChatGPT - a publisher’s friend or foe?



In my last blog, I wrote about AI’s ever-emerging presence in the publishing industry. Fast forward six weeks and ChatGPT is taking over the headlines (and the world, it seems). Obviously, my interest lies in the impact it will have on publishers and authors, especially in Siliconchip’s area of expertise - academic journals.


Having used the tool and read a lot about it, I’ve come to the conclusion that, for now, it has some uses but should be used with caution.


Can AI be held responsible for the content it produces?

This is a question I hadn’t thought much about until recently, but it was thrown into the spotlight when several published papers listed ChatGPT as an author, including an editorial in a healthcare journal.

At first, you might think that wouldn’t necessarily be an issue. But the crux is that being named as an author in an academic journal means you are directly responsible for the accuracy and integrity of your contribution to the work.

If ChatGPT’s contribution is found to be inaccurate, how can an algorithm be held responsible?

Many major publishers are updating their licensing and editorial policies to state that text generated by ChatGPT (or any other AI tool) cannot be used in the work.


What about using it for research?

There are other AI tools that specifically assist with peer review and research for authors, and I hope these will see a broader range of authors published. But ChatGPT’s limitations are made clear by OpenAI itself: it may occasionally generate incorrect information, and it may occasionally produce harmful instructions or biased content.


I’m not sure how researchers could knowingly rely on such a tool, given the first point. Scientific research, especially where people’s health is involved, cannot be left to occasionally incorrect, harmful or biased information.

Some major publishers also refer to the research use of AI tools or large language models in their policies, stating that their use should be documented in the methods or acknowledgements section of any paper submitted.


AI is only as good (or as legal) as the information fed into it

There are also ethics to consider. While the URL includes the words “open AI”, the code does not appear to be open source, and there is no peer-reviewed scientific paper, which has been customary for other models of this type. To deliver its answers, ChatGPT was trained on over 300 billion words systematically scraped from the internet, including personal information, some of which was obtained without consent.

I have always backed innovation; I believe it brings endless possibilities for humanity. But like all technological advancements, it will have its positives and negatives. And it will take us a while to figure out whether ChatGPT is friend, foe, or something in the middle.


Photo by Volodymyr Hryshchenko on Unsplash

