You must humanise AI text! Why AI-generated copy should only be your notes or (sometimes) first draft

An image of a wise-looking elderly man with white hair and beard. He is wearing a green polo shirt and bright yellow cardigan. A cut-away section at the top left of his head shows that his brain has been replaced with robotics. He is standing in his kitchen at home.
AI: it presents like your wise old grandad, but some of the crucial bits are missing.

Do you need to humanise AI text? Yes, yes you do. That’s the short answer, so if that’s all you need, you can now make that mug of coffee and eat a chocolate digestive or whatever you fancy. But if you want some context and rationale, please read on.

Disclaimer: I have just introduced some new services to humanise AI text, services that will localise/sectorise and generally optimise whatever ChatGPT/Perplexity/Claude/insert AI text-generator of choice has given you. So you might say that I have a vested interest. You’re not wrong, but I also have years of experience in editorial. This includes hours spent combing through text with lawyers and other hours checking for plagiarism. So I do know what I’m talking about.

Seriously, unless you’re the type of adrenaline junkie that thrives on exposing yourself to risk, there are many good reasons NOT to publish raw AI copy, or even slightly-tarted-up-so-that-it-reads-a-bit-less-like-a-robot copy. And trust me, hallucinations are only the start. There are many good reasons to use AI copy for note-taking, planning and – sometimes, in cases where thought leadership, insight and originality are really not important – a first draft. After that, you need human intervention.

Here are some reasons why.

Risks of failing to humanise AI-generated text

You may breach copyright

AI text generators are not human and they cannot think in human ways. They cannot think, full stop. That remains true however much AI is personified and presented as your electronic friend with faultless knowledge of everything. Every idea a chatbot gives you is something another (real) person has thought of before you. AI scrapes human-generated content and synthesises it with varying degrees of success. This makes it a minefield for plagiarism and copyright infringement. That's copyright infringement, the legal tort that you can be sued for. The tort you can be sued for even if you didn't realise you were doing it. That kind of copyright infringement.

If you think I’m overstating this, remember that the BBC is currently threatening Perplexity with legal action for copyright breaches, and multiple publishers are trying to keep AI’s hands off their content.

The fact that AI spits out oven-ready content makes it easier than ever for the unwary to plagiarise other people's work. Some chatbots are better than others when it comes to providing sources. But how many AI users actually check those sources and make sure they are relevant, authoritative, correct and have been rendered in an original form within the AI-generated text?

I’d be surprised if as much as 10% of people using AI to generate copy routinely do the due diligence required to manage risk. Yet any breach of copyright may land you in court at worst and make you look untrustworthy at best. Either way, you lose.

Humanising – in the form of paraphrasing, referencing and fact-checking your AI-generated copy and working in your own text – drives down the risks. Why would you overlook it?

Incidentally, I'm anticipating the day on which two businesses ask AI the same question and get exactly the same answer (because why would they not?). If they both publish that copy, unedited, at around the same time, who (if anyone) has breached whose copyright?

Unedited AI-generated copy may damage your brand’s value and profile

I've said it above, but this bears repetition: AI text generators are not capable of truly human thought; they simply aggregate, synthesise and regurgitate existing ideas and work. So if you rely on raw output from a chatbot long term, you may well (continually) restate things your audience already knows.

People tend not to particularly respect those who tell them stuff they already know. They are unlikely to consider such people (or businesses) groundbreaking, insightful or clever.

Additionally, because robots are not capable of truly human discernment, AI sometimes gets things wrong. Perhaps more often, AI text generators get things wrong because the question they are actually asked is not quite the question the questioner thought they were asking. Priming a chatbot correctly is a skill.

There are things you can do to fix this, of course. One is to tack a few original thoughts of your own onto your AI copy. But that means matching the tone of the (probably rather bland) AI copy in a seamless way, and that – trust me, I've been editing for decades – means rewriting the copy so that everything flows coherently. That takes skill – and time.

However, if you want to stay credible in almost any technical sphere, you will also need comprehensive referencing, or at least source acknowledgement, in your copy.

In other words, you need to humanise your AI text.

AI-generated text can be low quality

SEO company Semrush has recently identified the sources most frequently drawn upon by AI bots. These include multiple public opinion forums like Reddit and Quora. However, as I have pointed out before, these have zero barrier to entry and contributors are self-selected. It’s like asking some random bloke in the pub who keeps banging on about politics for his opinion on international monetary policy. You might get an insightful answer, but what are the chances? And if he does tell you something insightful, how will you know that he’s trustworthy and correct?

Furthermore, as Marlon Horner points out on LinkedIn, manipulating comment sites is incredibly easy. Presumably, this and other deliberate attempts to manage brands via online forums will increase massively now that people know they're AI sources. Certainly, SEO companies are already telling brands to join Reddit and the like and take part – in other words, to spam them with self-interested comments.

Disclaimer: I can see that marketers of consumer products will find crowd-sourced opinion useful. If I was selling chocolate biscuits, then – given that my target market could include most people – the opinions of most people, even the uninformed, would be useful. However, for people working in technical and scientific worlds, the direct opposite is true. When expertise and empirical data are your lifeblood, directly quoting Reddit can be bad news indeed.

And then we have the ongoing attempts of the creators of original content to stop AI nicking it. If these (respected, trustworthy) publishers succeed in blocking AI scrapes, what will be the quality of data left behind for the AI bots to eat?

Original copy is more valuable than AI text in technical sectors

Sometimes, however much you try to humanise AI text, AI cannot get you beyond basic research. It may give you context and background, but no more.

For hundreds of years the technical, engineering and scientific sectors have shared information, demonstrated credibility and earned status by publishing and criticising original thought. That’s why academic journals have impact factors. It’s partly why people spend hours on research. Then more hours on writing it up and responding to peer reviews. Then they bend themselves out of shape to meet every last demand of a publication’s editor. In technical spheres, objectivity and creativity informed by expertise really matter.

AI cannot generate this content. An AI text generator is an aggregator, synthesiser and regurgitator. The human-seeming nature of AI outputs is hugely misleading. Your AI text generator cannot think creatively like a person; it cannot come up with the ideas, insights, solutions, abstractions and projections that human beings can. Sometimes, only a human brain will do.

If your brand relies on respect, innovation and expertise, you must regularly publish original thought. That could be white papers, how-to guides, webinars or journal articles. The format is up to you. The important thing is to ignore AI and do it.

Summary

Original content is always the gold standard. Personally, I never use AI to draft my work; I only ever use it for research.

However, in some sectors and for low-impact, high-volume copy, AI-generated text can be useful. Even then, AI copy should be humanised; this minimises risk to the business and safeguards branding and reputation.

For technical, scientific, industrial and manufacturing brands, the use of raw AI copy is fraught with risk. These sectors have to be far more diligent than their mass-market, consumer-focused counterparts. Frankly, if you work in one of these businesses and use AI copy straight from the app, I strongly advise you to sit down and have a word with yourself.

After that, please feel free to peruse my services to humanise AI text, and let’s chat.
