AI and ChatGPT: 3 challenges in content writing

Jul 13, 2023 | AI, Strategy

Unless you’ve been living under a rock for the past six months, the chances are you’ve heard about ChatGPT and other generative AI tools.

Maybe you’ve tested them out yourself – either for fun, to see what they can do, or to investigate the practical applications of AI in your own work. Perhaps you’ve already started using tools like ChatGPT to help you with content creation, generating blog post drafts or social media posts.

We’ve been testing the capabilities of AI ourselves on the PracticeWeb content team.

What we’ve found is that while there are practical ways ChatGPT can be used to support and streamline the content writing process, there are also some flaws to watch out for when you’re using it.

As a general rule, human involvement is still essential in content creation. While AI tools can support the writing process, they can’t do everything a writer can.

Accuracy, knowledge and ‘hallucinations’

While ChatGPT is trained on a vast amount of data, it doesn't have in-depth knowledge of every topic. And although OpenAI updates its models periodically, the GPT-3.5 model behind ChatGPT was trained on data that only runs up to September 2021, so its responses about events after that date may be factually inaccurate.

To make things harder, it’s good at sounding convincing even when it’s wrong, with many users reporting so-called ‘hallucinations’: confident responses that are factually incorrect.

Back in December 2022, moderators of the programming Q&A website Stack Overflow decided to temporarily ban ChatGPT-generated answers, with the following explanation:

“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce.”

This highlights the importance of thoroughly fact-checking anything ChatGPT produces before using it – and because it doesn't cite its sources, this isn't always quick or easy to do. Whoever checks the content needs a good level of subject knowledge, and the time and attention to analyse the factual accuracy of the piece.

Generic, stock responses

ChatGPT doesn't 'think' in the way a human does. So while it might produce a coherent, objective summary of a topic, it can't add a unique perspective or a personal opinion.

As a result, its outputs on complex or nuanced topics can be generic and at times bland. In response to the question ‘Is MTD good for accountants?’, for instance, ChatGPT answered:

“Making Tax Digital (MTD) can be advantageous for accountants by streamlining processes, providing real-time information, improving accuracy, and strengthening client relationships. However, challenges include software adoption, training costs, data security, client readiness, and increased workload during the transition. Accountants should carefully evaluate their circumstances to leverage the benefits of MTD while mitigating potential challenges.”

While this could be a good answer if you were looking for a balanced and objective view on the topic, it wouldn’t be as useful if you were hoping to spark debate or tell your audience something new. 

The most engaging content tends to be the kind that provokes an emotional response or establishes a human connection. Readers are interested in the views, ideas and opinions of people with experience; they respond to stories and anecdotes, and to new takes on topics they care about. 

If you’re trying to establish a strong social media presence or bolster your reputation as a professional, ChatGPT can’t replace the impact of real-life experiences and genuine personal connections.

Similarly, it might not be able to perfectly replicate the tone that you or your colleagues would use in communications, or the overarching voice of your brand. And in sensitive contexts, it might not provide an appropriate response. You can feed it prompts, instructions and examples to inform the tone to some degree, but in many cases, human judgement is still important.
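If you want to experiment with this yourself, here's a minimal sketch of how tone instructions can be passed to ChatGPT programmatically through OpenAI's API. The model name, the brand-voice wording and the example request are illustrative assumptions, not a recommendation:

```python
# A minimal sketch of steering ChatGPT's tone via the OpenAI API.
# Assumes the `openai` Python package (pre-v1 interface) and your own
# API key; the brand-voice wording below is purely illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # A system message is the usual place to describe tone of voice
        {
            "role": "system",
            "content": (
                "You are a content writer for an accountancy firm. "
                "Write in a warm, plain-English tone; avoid jargon "
                "and keep sentences short."
            ),
        },
        {
            "role": "user",
            "content": "Draft a two-sentence intro about Making Tax Digital.",
        },
    ],
    temperature=0.7,  # lower values give more predictable, conservative phrasing
)

print(response["choices"][0]["message"]["content"])
```

Even with instructions like these, the output still needs a human edit before it genuinely sounds like you.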

Ethical questions

A final challenge of working with tools like ChatGPT, and one that shouldn't be overlooked, is the wider impact of this kind of technology. Although generative AI is expected to help tackle a wide range of scientific and social problems, it has also come under fire over several ethical issues.

One of these is plagiarism. Generally speaking, ChatGPT doesn’t copy other sources: it’s trained on a large volume of text data, and it works by predicting the most statistically likely response to a given input, rather than directly lifting any content.

But this isn't a guarantee, and there's always a chance it will reproduce the same or similar phrasing to the text it learned from – particularly if you ask it about a niche topic with limited source material to draw on.
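To make the idea of 'predicting the most statistically likely response' more concrete, here's a toy sketch of next-word prediction. The vocabulary and probabilities are invented purely for illustration – a real model works over tens of thousands of tokens:

```python
# A toy illustration of next-word prediction: the model assigns a
# probability to each candidate next word and samples one of them.
# The vocabulary and probabilities below are invented for illustration.
import random

def next_word(probabilities: dict[str, float]) -> str:
    """Sample one word according to its probability."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution for the prompt "Making Tax Digital is ..."
candidates = {"a": 0.45, "the": 0.25, "an": 0.15, "HMRC's": 0.10, "mandatory": 0.05}

print(next_word(candidates))  # e.g. "a" – the most likely choice, but not guaranteed
```

Because the output is sampled from probabilities learned from human-written text, familiar phrasings can resurface – which is why the plagiarism risk described above can't be ruled out entirely.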

Another potential issue is the creation of biased or offensive content. ChatGPT 'learns' from sources that humans have written, so it's possible it could mimic their biases and prejudices.

Once again, to combat these challenges, it’s essential to comb through the content and exercise your own judgement on the statements it makes. You can also use plagiarism checkers to detect any copied content.

OpenAI, the company that created ChatGPT, has itself made efforts to build safeguards against harmful content. But the methods used to do so have also been called into question: in January 2023, Time magazine published an investigation into OpenAI's use of underpaid workers to label toxic and explicit content in order to prevent the system from replicating it.

Then there are environmental issues, as large language models are highly energy-intensive to train and run.

Limited research has been carried out on their environmental impact, but it’s been roughly estimated that ChatGPT emits about 23 kg of carbon dioxide a day – more than twice the average emissions of a person in the UK. That doesn’t include the energy involved in other factors, such as training the system or pre-processing the data needed to do so. 

Other researchers have raised concerns around the tool’s water footprint, referring to the consumption of water used for cooling in data centres. One report suggested training GPT-3 alone required some 700,000 litres of clean freshwater.

If generative AI tools continue to become more powerful and more widely used, and if their data usage continues to grow, they could have a significant impact on the planet in the future.

All of these factors should be considered when deciding on the ways you adopt AI as a business. Various bodies are discussing these challenges – for instance, you can find guidance and commentary on ethics and new technology on the ICAEW website.

Final thoughts

We’ve found ChatGPT to be the most useful when it’s supporting the work we do as writers by giving us a starting point: a title idea, a suggested structure, or maybe even a rough first draft. But we’ll never use automatically generated content without first putting it through rigorous editorial processes, to make sure it’s accurate, engaging and original.

There’s no denying that the recent developments in large language models and generative AI represent a seismic shift in thinking and approach for content professionals and creatives of all kinds.

Who knows what AI will be able to achieve in five years' time, or what it might mean for the writers of the future? For now, we'll be taking advantage of the huge opportunities it offers, while maintaining a critical awareness of its drawbacks and never compromising on quality of work.

If you want to find out more about our tailored content for accountants, get in touch.
