
AI and the Illusion of Superintelligence

This is the Critical Exegesis that I submitted for my Master of Creative Writing.

Generative AI: Separating Fact from Fiction

Les Ey

Abstract

Applications that use artificial intelligence (AI) in the form of Large Language Models (LLMs), commonly known as chatbots, are having a massive impact on how people create text and digital art. This paper discusses the materialistic worldview (the view that matter is all there is), its influence on AI development, its shaping of public perception, and its implications for the question of what the mind is.

I intend to show that human creativity ultimately comes from a place that is apparently beyond the reach of mathematics and algorithms. AIs may use human data to create, but they only mimic human emotion; they lack the lived experience of human creators. (Cave) My recommended solution is raising awareness, and I discuss my intention to do so through fiction that dramatises the issues. To make my case, I draw on arguments from Brooks, Searle, Egnor and various other sources.

Introduction

The goal of many AI developers is to create human-level intelligence. AI can already do numerous tasks that, not long ago, many would not have thought possible, and it is this combination of genuine success and public misunderstanding that makes the hype so misleading. (Brooks, Transformers) AI developers will have only limited success in their goal. A human will always be in the loop; AI will lead neither to a robot apocalypse nor to a utopia. (Godhe; Chubb) While it is not an existential threat to humans, it may subtly impact people’s worldviews. My concern is that because AI is so impressive, people will believe that it is sentient and buy into the idea that the human mind is just a meat computer, as opposed to the dualist view that the mind is more than the physical. I will touch on both sides: why those who think the mind is merely physical believe superintelligence will be achieved, and why I believe they are wrong.

Impact via the Media

The mainstream media goes to AI experts as a source of information on the latest developments. Consequently, the worldview of these experts may be passed on to the public as they explain how such systems function.

Heimann discusses the computational theory of mind and how it underwrites much of the development of AI today. As Heimann explains, computationalism ‘views the mental world as grounded in a physical system (i.e., computer) using concepts such as information, computation (i.e., thinking), memory (i.e., storage), and feedback.’ (Heimann)

Philosophically, computational theories of mind are based on secular materialism, according to which all reality is ultimately reducible to physical systems that operate according to impersonal laws of nature. I will show later how that view can be disputed. Still, it is apparent that the materialistic view is the dominant view among AI developers (Brooks, ALIFE) and the broader public, who may accept it uncritically. So, it would appear that materialists are dominating the narrative. If the developers of AI are wrong, then the media will pass on misleading or exaggerated conclusions to the public.

Human Creativity

It is important to consider what human creativity is and then look at where it comes from. Rodney Brooks, who has spent most of his life thinking about human cognition and artificial intelligence, takes a materialistic view of the origin of human intelligence. He is the Panasonic Professor of Robotics (emeritus) at MIT. He grew up in Adelaide and attended Flinders University in South Australia. He earned his Ph.D. in Computer Science at Stanford University and has lectured at several universities. He started several robotics companies. (MIT) If you are tired of the viral videos of cats riding on robot vacuum cleaners, then you can blame him. Not for the videos but for the robots that made them possible. He is currently the CTO and co-founder of Robust AI. These are just some of his achievements. (MIT) He has a CV that would be the envy of most nerds, and like many nerds, he is an ardent materialist. For him, human emotion and consciousness are purely mechanical.

‘So here is the crux of the matter. I am arguing that we are machines and we have emotions, so in principle it is possible for machines to have emotions as we have examples of them (us) that do. Being a machine does not disqualify something from having emotions. And, by straight-forward extension, does not prevent it from being conscious.’ (Brooks, Flesh 176)

But even Brooks is not in the superintelligence camp; at least, he does not think it can be achieved this century, if at all.

‘We may be stuck in some weird Gödel-like incompleteness world–perhaps we are creatures below some threshold of intelligence which stops us from ever understanding or building an artificial intelligence at our level. I think most people would agree that that is true of all non-humans on planet Earth–we would be extraordinarily surprised to see a robot dolphin emerge from the ocean, one that had been completely designed and constructed by living dolphins. Like dolphins, and gorillas, and bonobos, we humans may be below the threshold.’ (Brooks, Steps)

‘Or, perhaps, as I tend to believe, it is just really hard, and will take a hundred years, or more, of concerted effort to get there. We still make new discoveries in chemistry, and people have tried to understand that, and turn it from a science into engineering, for thousands of years. Human level intelligence may be just as, or even more, challenging.’ (Brooks, Steps)

Brooks is not alone in his concerns. Noam Chomsky pioneered the modern understanding of language structure, and LLMs may not have been possible without his work. (Shackell) Shackell discusses Chomsky’s main complaint about systems like ChatGPT.

‘His main complaint is that such systems are a dead end in the search for true artificial general intelligence (AGI). Rather, he views them as a souped-up autocomplete – useful for creating computer code or cheating on essays, but not much else.’ (Shackell)

Brooks believes that human intelligence is mechanical, but where does creativity really come from?

Writers have used stories to explore what it means to be human through the ages. Aesop’s fables used animals to explore human nature. In 1818, Mary Shelley’s ‘Frankenstein’ explored what it would be like to create human life. In 1883, the Italian author Carlo Collodi published the story known in English as ‘The Adventures of Pinocchio’, in which the marionette Pinocchio wants to become a real boy. In ‘Star Trek: The Next Generation’, the android Data embodied many of those themes but placed them in a futuristic setting.

Until now, machines have been different from us, and those stories allowed us to explore those differences. But with LLMs in the form of chatbots, the line is getting blurred, at least superficially. Yet even chatbots must be trained by humans using data initially generated by humans. ChatGPT is a popular LLM produced by OpenAI and is used as the engine for many commercial applications, including Bing Chat. (Boyd) Shreya Johri explains where the data to train ChatGPT came from:

‘The training dataset consisted of text collected from multiple sources on the internet, including Wikipedia articles, books, and other public webpages.’ (Johri)

Basically, the data came from us. That’s where AI gets its creativity from, but where does human creativity come from? Is it something that can be computed? If a songwriter or author is asked where their inspiration comes from, they may talk about influences or the mythical Muse. But where do original ideas come from? Ideas that were never tried before? The answer is often that the idea just comes out of nowhere.

‘I never sit down to write. When I’m moved, I do it. I just wait for it to come. You just hear it. I can’t really describe writing. It’s in my head. I don’t think about the styles. I write whatever comes out and I use whatever kind of instrumentation that works for those songs.’ (Kravitz)

So how do you write an algorithm for ‘it’s in my head’? Of course, you can examine the work of various artists and mimic that, and that is basically what LLMs do, but when it comes to something new, a spark of genius, then ‘I can’t really describe writing’ is not going to help a programmer. As a programmer, I want a clear, detailed list of steps and logic before I start writing a program. I don’t want to hear ‘the melody came to me in a dream.’ (McCartney) I want maths, data and logic. If you want me to write software that copies media from somewhere else, I can do that, but I can’t make a computer have a dream. At least not a dream that is not a random mess of meaningless static.

Materialists attempt to reduce human intellect to a biological function that can be explained by neurons and synapses. Even critics of strong AI, like Chalmers, Penrose and Searle, are materialists; they just question the ability of computers to do what our brain is doing. (Brooks, Flesh 181–187)

Brooks spends much of pages 181 to 187 criticising Chalmers, Penrose and Searle. Yet he himself has issues with computation as a suitable analogy for what the brain is doing. He writes that it might require ‘a few Einsteins and Edisons to figure out a few things we don’t understand’. (Brooks, Flesh 187; Brooks, SPRING)

Materialists reject a non-material source of intelligence. So, is there any scientific reason to dispute them? Neurosurgeon Michael Egnor is adamant that there is. Egnor argues that computation is not semantic: it is not ‘about’ anything. But when we think, our thoughts have meaning; our thoughts are ‘about’ something. (Egnor, Mind Matters)

In a talk, Egnor outlines the work of three researchers who made discoveries about the mind. He states that Dr Wilder Penfield was a devout materialist who ended his career as a passionate dualist. By stimulating various areas of a patient’s brain, Penfield could produce various responses, such as making the patient’s arm move, but the patient would always know that Penfield had done it; the patient would never think it was their own choice. So, Penfield was never able to stimulate agency. Penfield also found that there are no intellectual seizures of the brain: no one ever does calculus uncontrollably.

Egnor also mentions Roger Sperry’s work with split-brain patients. It required ‘Nobel prize-winning research’ to find any difference between patients who had had the operation and those who had not. Egnor recounts his experience with his own patients who have had the same operation and states that they still have a single mind; ‘they don’t have two intellects or wills’.

The other researcher Egnor mentions is Benjamin Libet. Libet was interested in the question of free will. He would ask volunteers to decide to do something, such as push a button, while he measured their brain activity. Libet discovered that a brain wave would fire about half a second before the volunteer was aware of the decision. Materialists interpreted that as proof that we don’t have free will, but Libet asked the volunteers to try something different: to decide to do something, then change their mind and decide not to do it. The decision not to act did not register as a brain wave. Libet called that ‘free won’t’. (Egnor, YouTube)

The brain is a very complex organ, and there is still a lot to learn about it, but as Egnor has shown, some aspects of the mind don’t fit the materialist worldview.

However, the source of human creativity is a genuine problem for anyone hoping to build a superintelligent AI: it is impossible to specify the steps involved if you don’t understand the process you are emulating. Artificial intelligence is, after all, artificial. Based on Searle’s Chinese Room analogy, which will be discussed later, it should be clear that LLMs are not doing the same thing the human mind does. Shackell explains Chomsky’s view of neural networks.

‘Above all, he doesn’t believe neural networks (the basis of much of today’s AI) are the correct architecture for replicating human intelligence.’ (Shackell)

LLMs rely on human intelligence for training and for plugins (additional modules that use other methods to perform various functions). LLMs are not conscious, they are not grounded in the real world, and they do not have common sense or genuine understanding. LLMs work with symbols: ‘g0537’ could just as easily represent ‘cat’ to an AI. Brooks explains why this is important.

‘And this is the critical problem with symbolic Artificial Intelligence, how the symbols that it uses are grounded in the real world. This requires some sort of perception of the real world, some way to and from symbols that connects them to things and events in the real world. For many applications it is the humans using the system that do the grounding.’ (Brooks, Steps)
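To make the symbol point concrete, here is a minimal sketch (my illustration, using a hypothetical toy vocabulary, not how any production system is actually coded) of how text looks from inside an LLM: every word becomes an arbitrary integer ID, and all of the model’s computation happens on the IDs.

```python
# A minimal sketch (illustrative only) of how text looks to a language model.
# The model never sees the word 'cat', only an arbitrary integer token ID.

vocab = {"the": 101, "cat": 537, "sat": 882, "on": 14, "mat": 2041}
inverse_vocab = {token_id: word for word, token_id in vocab.items()}

def encode(sentence: str) -> list[int]:
    """Replace each word with its arbitrary token ID."""
    return [vocab[word] for word in sentence.split()]

def decode(token_ids: list[int]) -> str:
    """Map the IDs back to words for the human reader."""
    return " ".join(inverse_vocab[t] for t in token_ids)

ids = encode("the cat sat on the mat")
print(ids)          # [101, 537, 882, 14, 101, 2041]
print(decode(ids))  # the cat sat on the mat

# Everything the model computes happens on lists like [101, 537, ...].
# The grounding -- knowing that 537 refers to a furry animal -- is
# supplied entirely by the humans who write and read the text.
```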

AIs can’t ground themselves in the real world. That doesn’t mean they are not useful or impressive; they have come a long way in a reasonably short time. However, the idea that they are about to produce superintelligence that will rival human intelligence is mistaken. What they have achieved is not only impressive but also seductive. It is easy to rely on them. The question is, should we? Even the makers of ChatGPT don’t seem to think so. Brooks quotes OpenAI’s System Card Report (p. 57):

‘As noted above in 2.2, despite GPT-4’s capabilities, it maintains a tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly. Further, it often exhibits these tendencies in ways that are more convincing and believable than earlier GPT models (e.g., due to authoritative tone or to being presented in the context of highly detailed information that is accurate), increasing the risk of overreliance.’ (Brooks, Transformers)

Brooks goes on to quote an earlier System Card Report (p. 7):

‘In particular, our usage policies prohibit the use of our models and products in the contexts of high risk government decision making (e.g, law enforcement, criminal justice, migration and asylum), or for offering legal or health advice.’

So, it would seem that if your surgeon relies on ChatGPT instead of years of training, you might want to get a new surgeon.

Limitations of AI

Much of the AI hype is based on faith that progress will continue exponentially, and that AI will eventually become autonomous and train itself into superintelligence. But Brooks is sceptical about LLMs achieving that.

‘Calm down people. We neither have super powerful AI around the corner, nor the end of the world caused by AI about to come down upon us.’ (Brooks, Transformers)

The following quote comes under the heading ‘ALWAYS A PERSON IN THE LOOP IN SUCCESSFUL AI SYSTEMS’.

‘Many successful applications of AI have a person somewhere in the loop. Sometimes it is a person behind the scenes that the people using the system do not see, but often it is the user of the system, who provides the glue between the AI system and the real world.’ (Brooks, Transformers)

Generative Pre-trained Transformers (GPTs) are the kind of LLM that ChatGPT uses. While Brooks seems to think it will be possible for AI to generate unique text, he doesn’t accept that GPTs will be able to do it without an add-on.

‘When no person is in the loop to filter, tweak, or manage the flow of information GPTs will be completely bad. That will be good for people who want to manipulate others without having revealed that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated.’ (Brooks, Transformers)

John Searle used the Chinese Room analogy to explain his concerns about ‘strong AI’, namely AI that has genuine understanding.

‘Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.’ (Stanford)
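The mechanics of the room can be sketched in a few lines (a toy illustration of Searle’s thought experiment, with a hypothetical two-entry rule book): the program matches incoming symbol strings against a table and passes out the paired reply, and no step involves meaning.

```python
# A toy version of Searle's room (illustrative only): the 'rule book' is a
# lookup table pairing input symbol strings with output symbol strings.
# The operator matches shapes; no step involves knowing what they mean.

rule_book = {
    "你好吗?": "我很好, 谢谢。",          # "How are you?" -> "I am fine, thanks."
    "天空是什么颜色?": "天空是蓝色的。",   # "What colour is the sky?" -> "The sky is blue."
}

def room(input_symbols: str) -> str:
    """Follow the rule book: match the incoming symbols, pass out the listed reply."""
    return rule_book.get(input_symbols, "对不起, 我不明白。")  # default: "Sorry, I don't understand."

print(room("你好吗?"))  # a fluent Chinese reply, with zero understanding inside
```

Swap the Chinese strings for any other symbols and the program is unchanged; that indifference is Searle’s point about syntax without semantics.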

Searle received considerable pushback in response, and the response is telling. In Searle’s opinion, the brain is a machine that thinks, but he disputes that it is a computer. His issue with current computers is that they use a set of rules to process symbols: they have syntax but not semantics (syntax is the set of rules of a language; semantics is the meaning behind the message). He stands by his argument despite the pushback. (Searle) The pushback is understandable for materialists because they are convinced that if the brain is a machine, then machines must be able to produce understanding.

Brooks is one of Searle’s critics and thinks that Searle is confused. (Brooks, Flesh 176–179) He uses what Searle calls ‘the system argument’ to dispute Searle: Brooks believes that the entire system can be conscious. Searle counters the system argument by arguing that he could memorise the rule book but would still not understand Chinese. (Searle)

Ironically, Brooks refers to something he calls ‘The Juice’ (Brooks, Lecture; Flesh 188) because of his doubts that current AI models can produce ‘superintelligence’. (Brooks, Transformers) For Brooks, ‘The Juice’ is the unknown idea missing from our current understanding, the one that might require a few Einsteins and Edisons to figure out. (Brooks, Flesh 187) He rejects the possibility that ‘The Juice’ is anything special (Brooks, ALIFE) and leans instead towards a mathematical principle or a new metaphor that will radically change our understanding of intelligence. (Brooks, Flesh 187)

The whole controversy hinges on the assumption that intelligence must result from natural processes occurring in a machine called the brain. Searle can see the absurdity of a machine that merely follows syntax having semantics. Brooks can see the absurdity of accepting that the brain is a machine while rejecting that a machine can possess understanding. Could it be that ‘The Juice’ is metaphysical, at least in part?

It should be clear to anyone who has encountered an obvious ‘hallucination’ while chatting with a chatbot, or had it create an amazing image in which the beautiful subject has three arms, that LLMs haven’t mastered semantics. They are matching labels with mostly correct maps of pixels, or guessing which word comes next in a complicated version of autocomplete. They don’t do anything on their own initiative; they are programmed, trained or prompted by humans.

The key takeaway is that, at the very least, a different kind of computer or a different AI model, one that can think the way humans do, will be required before AIs can replace human creators or be as reliable. And that ignores the possibility that the mind may have an immaterial aspect that can’t be simulated physically. Even if some intelligent programmers discover enhancements that reduce the unreliability of LLMs to a negligible percentage, LLMs will still require massive amounts of human-generated data and human trainers. (Brooks, Transformers; OpenAI) The way that neural nets learn is entirely different from how humans learn. James Fodor explains:

‘Neural nets are typically trained by “supervised learning”. So they’re presented with many examples of an input and the desired output, and then gradually the connection weights are adjusted until the network “learns” to produce the desired output.

To learn a language task, a neural net may be presented with a sentence one word at a time, and will slowly learns [sic] to predict the next word in the sequence.

This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves.

For instance, children aren’t given instructions on how to speak, but learn this through a complex process of exposure to adult speech, imitation, and feedback.’
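Fodor’s description can be sketched in a few lines. The following toy bigram counter is my illustration, not how production LLMs are built (they use neural networks with billions of adjustable weights), but it shows the core of the supervised objective: every next word in the training text serves as the ‘right answer’.

```python
# A toy next-word predictor (illustrative only). For every word in the
# training text, the word that actually follows it is the 'right answer'
# the model learns to predict -- the supervision Fodor describes.

from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog sat on the rug ."
)

counts: defaultdict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1  # supervision: the actual next word

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most common word after 'the'
print(predict_next("sat"))  # 'on'
```

Scaled up enormously, and with a neural network in place of the lookup table, this predict-the-next-word objective is the engine behind GPT-style models.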

Humans require only a fraction of the data to learn. Fodor states:

‘Another difference is the sheer scale of data used to train AI. The GPT-3 model was trained on 400 billion words, mostly taken from the internet. At a rate of 150 words per minute, it would take a human nearly 4,000 years to read this much text.

Such calculations show humans can’t possibly learn the same way AI does. We have to make more efficient use of smaller amounts of data.’
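The scale is easy to check with back-of-the-envelope arithmetic (a rough check under stated assumptions; the exact figure shifts with the word count used, but it lands in the millennia either way):

```python
# Rough check of Fodor's reading-time comparison (assumptions noted inline).
words = 400e9            # training words quoted for GPT-3
words_per_minute = 150   # the reading speed Fodor assumes
minutes_per_year = 60 * 24 * 365
years = words / words_per_minute / minutes_per_year  # reading nonstop, no sleep
print(round(years))  # ~5073 years at 400 billion words; counting GPT-3's
                     # training tokens (roughly 300 billion) instead gives
                     # about Fodor's 'nearly 4,000 years' -- millennia either way
```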

But what if we were to use AI to train AI? Wouldn’t it get better and better until it became infinitely intelligent? Marks elaborates on the work of Shumailov et al.

‘In a recent insightful paper written by collaborators from Oxford, Cambridge and other prestigious institutions, the verdict is in for large language models (LLMs) like ChatGPT. Repeatedly using the output of one LLM to train another results in what the authors call model collapse. … the AI eventually suffers model collapse and becomes a blubbering idiot. The emergence of superintelligence by this repeated process never happens. The opposite does. The result, rather, is super idiocy.’ (Marks)
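The dynamic behind model collapse can be demonstrated with a deliberately simple stand-in (my sketch, not Shumailov et al.’s experiment, which used real language models): fit a distribution to some data, generate synthetic data from the fit, refit on the synthetic data, and repeat. Estimation errors compound, and later generations drift away from the original data.

```python
# A deliberately simple stand-in for model collapse (illustrative only;
# Shumailov et al. experimented with real language models). Each generation
# fits a Gaussian to its training data, and the next generation is trained
# only on samples drawn from that fit. Estimation errors compound.

import random
import statistics

random.seed(0)
SAMPLE_SIZE = 20
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]  # 'human' data

for generation in range(1, 51):
    mu = statistics.mean(data)       # 'train' a model: estimate the mean...
    sigma = statistics.stdev(data)   # ...and the spread of the data
    if generation % 10 == 0:
        print(f"generation {generation}: mean={mu:+.3f} sd={sigma:.3f}")
    # the next generation learns only from the previous generation's output
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]

# The fitted spread performs a downward-biased random walk, so over enough
# generations the synthetic data loses the variety (the 'tails') of the
# original: a toy analogue of the degradation Shumailov et al. report.
```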

Even if you reject the possibility of a metaphysical factor in human understanding, current computers will not lead to superintelligence in the foreseeable future. There must always be a human in the loop. Human creatives still have a role and can use AI as a tool. The hope is that they will use it ethically and with caution, but unfortunately, like any technology, if it can be exploited, it probably will be.

Degradation of Voice

Stephen King has a unique writing style, so distinctive that even when he wrote under a pseudonym, astute readers could tell that King was the author. (King) But what if King started using Bing to speed up his writing? If King asked Bing to write a scene where someone was brutally murdered, Bing would refuse because of its ‘safety instructions’. As an experiment, I prompted Bing with the following: ‘Please write a short paragraph for a novel about a violent murder’. Bing replied, ‘I’m sorry, but I cannot fulfil your request. Writing about violent murders is not aligned with my safety instructions…’ (Bing)

That’s an extreme example, and King would quickly find a workaround. But what if it were a less experienced writer, and instead of refusing, Bing generated text that appeared reasonable but was too ‘safe’ when the story needed a more intense approach? Peter Gregory, writing in Forbes, quotes Lewis Wynne-Jones on the impact of AI:

‘It will put a greater emphasis on having a “voice.” AI is excellent at putting together cogent, but ultimately lifeless, writing. Authors who have good ideas and execution but lack the human touch will be most impacted by AI-generated writing. As a writer, you should always focus not just on your argument, but what you—the human behind it—bring to the process.’ (Gregory)

I agree. Authors should bring humanity into their writing: not just emotion, but intelligent emotion. AI may mimic emotions, but humans understand them. We should use AI as a tool, not as a master. Gregory also quotes Cristian Randieri:

‘However, this trend may lead to more competition for readers’ attention, and AI-generated content may lose originality and creativity as authors begin to rely too heavily on AI tools.’ (Gregory)

Nick Cave was asked what he thought about a song ChatGPT wrote ‘in the style of Nick Cave’. His reply was classic: he used a word to describe it that I’m reluctant to quote in a formal paper. But I can quote him as saying, ‘I do not feel the same enthusiasm around this technology’. The one dispute I have with Cave’s response is that he buys the idea that it ‘moves us toward a utopian future, maybe, or our total destruction’. But Cave is not wrong when he points out that ChatGPT is mere replication.

‘What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.’ (Cave)

Cave makes an excellent point about AI lacking the lived experience of songwriters.

‘Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.’ (Cave)

Maybe some people will be satisfied with AI writing songs about their broken hearts, but will it ever have as much emotional impact as Lindsey Buckingham singing ‘The Chain’ on stage only metres away from the woman who broke his heart? (Zeiler)

My Project

My main goal as an author has been to explore what it means to be human versus what AI is actually doing. My concern is not that AI will achieve amazing results; the results are already amazing, given the limitations of the machines hosting the AI. My concern is that because AI is so impressive, people will believe that AI is sentient and buy into the idea that the human mind is just a meat computer. My goal is to show people that we are much more than that.

For the main project, I intend to add scenes in which AI and technology, when misunderstood and abused, demonstrate AI’s limitations. For example, a programmer could use AI to write a security module without doing adequate testing. If the security module were for a mobile app that gave remote access to a humanoid robot, a vulnerability found by terrorists could become a physical threat. The scenario would illustrate the need for caution and the general lack of understanding of AI. Dialogue between the programmer and his team leader could spell out the issue dramatically, especially if there were a personality clash between them. If I positioned a writer as the main character and had him targeted by terrorists, I could have the writer discuss AI issues with an AI specialist, and use that dialogue to show why doomsday scenarios are portrayed in fiction while present-day reality is less dramatic.

If AI were as good as the hype is making it out to be, that would be one thing, but unlike Sci-fi, which paints the two extremes of utopia and dystopia, the reality is somewhere in between.

Conclusion

So, in summary, AI is transforming the art of writing, but AI is still artificial intelligence. We don’t know where human creativity comes from, so we cannot be certain how to program human-level intelligence, let alone superintelligence, on a computer. Materialists believe that we are mere machines and that it is only a matter of learning more about how the brain works, but they are far from understanding everything about the human mind. And because materialists dominate the tech industry, they dominate the narrative. While it looks implausible that current AI models will be able to generate a superintelligence, journalists in the mainstream media accept the materialist narrative without questioning its underlying philosophical assumptions. That leads to hype and unrealistic expectations for the future of AI. So when someone makes predictions, we should consider the bias of their worldview before accepting the predictions unquestioningly. Dualists like neurosurgeon Michael Egnor believe the mind is much more than the brain.

None of this means that AI is not extremely impressive. Using human-generated text, music and images as its database, it can mimic human creativity to a degree that seemed impossible not so long ago. But a human is always in the loop. The problem is that it is seductive; it is easy to trust it. I intend to write educational and entertaining fiction about what AI is really doing, and I hope I can help readers think about the more profound philosophical questions.

Works Cited

Bing, Bing AI Search, Microsoft Bing, 28 Nov. 2023, https://www.bing.com/chat

Boyd, E, ChatGPT is now available in Azure OpenAI Service, Azure, 9 Mar. 2023, https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service/

Brooks, R., Flesh and Machines: How Robots Will Change Us, Vintage Books, 2003

Brooks, R., ALIFE 2018 Keynote Speaker Day 3, ALIFE 2018, YouTube, https://www.youtube.com/watch?v=DpMYQ-gnzZM

MIT, Biography: Rodney Brooks – Roboticist, https://people.csail.mit.edu/brooks/

Brooks, R., SPRING 2023 GRASP On Robotics: Rodney Brooks, Robust.AI, GRASP Lab, YouTube, https://www.youtube.com/watch?v=IMyG0b-p_GE

Brooks, R., Steps Toward Super Intelligence I, How We Got Here, personal blog, 15 Jul. 2018, https://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/

Brooks, R., What Will Transformers Transform?, personal blog, 23 Mar. 2023, http://rodneybrooks.com/what-will-transformers-transform/

Cave, N., Issue #218 / January 2023, The Red Hand Files, Jan. 2023, https://www.theredhandfiles.com/chat-gpt-what-do-you-think/

Chubb, J.; Reed, D.; Cowling, P., Expert views about missing AI narratives: is there an AI story crisis?, AI & Soc, 25 Aug. 2022, https://doi.org/10.1007/s00146-022-01548-2

Egnor, M., Neurosurgeon Outlines Why Machines Can’t Think, Mind Matters, 17 Jul. 2018, https://mindmatters.ai/2018/07/neurosurgeon-outlines-why-machines-cant-think/

Egnor, M., Neurosurgeon Michael Egnor: Why Machines Will Never Think, Discovery Institute, YouTube, 2 Aug. 2018, https://www.youtube.com/watch?v=EXOX3RCpEbU

Fodor, J., We’re told AI neural networks ‘learn’ the way humans do. A neuroscientist explains why that’s not the case, The Conversation, 6 Jun. 2023, https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993

Godhe, M; Määttä, J; Bodén, D, 2021, A Conversation on AI, Science Fiction, and Work, Fafnir, Volume 8, Issue 2, http://journal.finfar.org/articles/2282.pdf

Gregory, P. H., Seven Ways AI Will Impact Authors And The Publishing Industry, Forbes, 6 Jul 2023, https://www.forbes.com/sites/forbestechcouncil/2023/07/06/seven-ways-ai-will-impact-authors-and-the-publishing-industry/?sh=2f01100323a6

Heimann, R., How the philosophy of mind and consciousness has affected AI research, The Next Web, 17 Apr. 2022, https://thenextweb.com/news/how-philosophy-of-mind-and-consciousness-has-affected-ai-research

Johri, S., The Making of ChatGPT: From Data to Dialogue, Harvard University, 6 Jun 2023, https://sitn.hms.harvard.edu/flash/2023/the-making-of-chatgpt-from-data-to-dialogue/

King, S. Frequently Asked Questions: Why did you write books as Richard Bachman, Stephen King, Accessed 28 Nov. 2023, https://stephenking.com/faq/#1.6

Kravitz, L., Songwriter’s Corner: 16 Influential Quotes About the Art of Songwriting, Guitar Songs Masters, 2018, https://guitarsongsmasters.com/songwriting-quotes/

Marks, R. J.; Swindell, If ChatGPT Had Children, Would They Be Geniuses or Blubbering Idiots?, Mind Matters, 28 Sep. 2023, https://mindmatters.ai/2023/09/if-chatgpt-had-children-would-they-be-geniuses-or-blubbering-idiots/

McCartney, P., Paul McCartney knew he’d never top The Beatles — and that’s just fine with him, Georgia Public Broadcasting, 3 Nov. 2021, https://www.gpb.org/news/2021/11/03/paul-mccartney-knew-hed-never-top-the-beatles-and-thats-just-fine-him

OpenAI, GPT-4 Technical Report, openai.com, accessed 28 Nov. 2023, https://cdn.openai.com/papers/gpt-4.pdf

Searle, J., Chinese room argument, Scholarpedia, 2009, http://www.scholarpedia.org/article/Chinese_room_argument

Shackell, C., Noam Chomsky turns 95: the social justice advocate paved the way for AI. Does it keep him up at night?, The Conversation, 12 Dec. 2023, https://theconversation.com/noam-chomsky-turns-95-the-social-justice-advocate-paved-the-way-for-ai-does-it-keep-him-up-at-night-218034

Shumailov, I.; Shumaylov, Z.; Zhao, Y.; Gal, Y.; Papernot, N.; Anderson, R., The Curse of Recursion: Training on Generated Data Makes Models Forget, arXiv, 27 May 2023, https://arxiv.org/abs/2305.17493

Stanford Encyclopedia of Philosophy, The Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/

Zeiler, M., Zeiler quoting Nicks, S. Top 10 Lindsey Buckingham Fleetwood Mac Songs: #5 The Chain. Classic Rock History, 2022, https://www.classicrockhistory.com/top-10-lindsey-buckingham-fleetwood-mac-songs/