
Will AI Completely Replace Human Writers?

Chatbots are already transforming writing. The writers’ strike in Hollywood was, in large part, a reaction to AI-generated writing. Novels can be written in days using AI and are being sold on Amazon (though Amazon is taking steps to limit the number of AI-produced books sold). YouTube videos can be generated in minutes instead of days.

So, what can writers and content creators do? Are writers entirely out of a job? Will they ever be completely replaced by AI? Is the hype real, or is this a complete non-issue?

I believe the truth is somewhere in the middle. Personally, I prefer to watch YouTube channels presented by humans. Even the videos I watch about AI have human presenters. I don’t know about you, but as soon as I recognise an AI voice, I stop watching unless it is particularly realistic, and even then it helps if the subject matter is something I really want to watch. I also do a lot of skimming, especially when a video has a clickbait title. I suspect I am not alone. Like most things, the cream rises to the top.

I’m convinced there will always be a role for humans, even if they use AI to generate content. That is because current AI systems are built on deep learning, and deep learning has real limitations. Here are the main ones.

Limitations of Deep Learning Models

  • They require access to countless web pages 
  • They lack real understanding
  • They are biased by the worldview of their programmers and the data they are trained on
  • They are a black box
  • They require vast amounts of power and memory

An AI needs millions of pictures to learn what a cat looks like, and the more pictures it is trained on, the more accurate it becomes.

It is easy to confuse an AI. For instance, when you ask an AI a question that requires an emotional response or access to your personal details, it will not be able to answer.

It may give you general advice but not the same kind of advice that a good friend might.

No one is free from bias, including the programmers who wrote the system. Elon Musk is setting up xAI to counter that, but as much as I admire Elon for some of his achievements, I don’t agree with him on everything, and his own biases will affect the result.

No one completely understands what is going on inside a deep learning system. Developers might understand the principles, but they could not tell you why a complex model chose one answer instead of another (though there are ongoing attempts to change that).

Bing Chat informed me that the computing power required for ChatGPT was 3,640 PF-days. (That was based on a Teller Report article from Jan 2023, so it could well be out of date – the point is, that is a LOT!)

So, I asked Bing Chat to explain that in everyday terms.

To put it in perspective, 1 PF-day is equivalent to performing one quadrillion calculations per second for an entire day. So, 3,640 PF-days means that the data centers running ChatGPT collectively performed an enormous number of calculations: the equivalent of one quadrillion calculations every second, sustained for 3,640 days (roughly ten years).

A petaflop is one quadrillion floating-point operations per second. A high-end laptop might do billions of calculations a second, but a quadrillion is a one followed by fifteen zeros, and the human brain is not designed to comprehend numbers that large. Bing gave me this analogy:

One billion would be equivalent to one kilometre, which is a distance you can walk down the street.

One quadrillion would be equivalent to 1 million kilometres, which is approximately 25 times around the Earth or almost as wide as the Sun.
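If you like to check this kind of arithmetic yourself, here is a minimal back-of-the-envelope sketch in Python. It takes the 3,640 PF-day figure quoted above at face value, and the laptop speed of 100 billion calculations per second is an assumed round number purely for illustration.

```python
# Back-of-the-envelope check of the numbers above.
# Assumptions: the 3,640 PF-day figure is taken at face value from the quoted
# Bing Chat answer, and "a high-end laptop" is pegged at a round 100 billion
# calculations per second for illustration only.

QUADRILLION = 10 ** 15          # one petaflop = a quadrillion operations per second
SECONDS_PER_DAY = 86_400

pf_days = 3_640
total_calculations = pf_days * QUADRILLION * SECONDS_PER_DAY
print(f"Total calculations: {total_calculations:.1e}")              # ~3.1e+23

# How long would a single laptop take to do the same amount of work?
laptop_per_second = 100 * 10 ** 9                                   # assumed 100 billion ops/s
laptop_years = total_calculations / laptop_per_second / (SECONDS_PER_DAY * 365)
print(f"Single-laptop equivalent: about {laptop_years:,.0f} years") # ~100,000 years

# The distance analogy: if one billion calculations were one kilometre...
earth_circumference_km = 40_075
print(f"One quadrillion = 1,000,000 km, about "
      f"{1_000_000 / earth_circumference_km:.0f} trips around the Earth")  # ~25
```

Running it puts the total at roughly 3 × 10²³ calculations, which is why even “a LOT” undersells it.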

Now, to address the elephant in the room. I have been using Bing Chat to fact-check and gather information for this article, which discusses AI’s limitations and tells you why it will never completely replace humans. Ironic, right?

That leads to my point about the truth being somewhere in the middle. AI will be a great tool. (I use it for fact-checking, for research, and as a grammar checker.) And AI will only get better.

The more it is trained, the more accurate it will be. I expect to have my mind blown. But I don’t expect AI to ever truly become conscious, or that we could ever be certain if it did. Consciousness is a subjective experience; to know for certain, it would have to become objective, and I don’t see that happening.

In the next article, I want to compare AI-generated writing with Hemingway’s.

If you want to support me, check out my Redbubble page.

\"\"