The Frankenstein Complex in Science Fiction
The Fear of Robotics and Artificial Intelligence
To what extent is a Frankenstein Complex evident in depictions of robots and artificial intelligence in science fiction literature? A comparative study of works from the Golden Age of science fiction and the twenty-first century.
Abstract
This dissertation establishes the extent to which a Frankenstein Complex is evident in literature from the Golden Age of science fiction (1940–1970) and the twenty-first century. It adopts a broad interpretation of the Complex, one not limited to the fear of artificial intelligence (AI) dominating or annihilating humanity but also encompassing the fear of AI as an unknown entity with the potential to threaten employment and destabilise the private setting of the family home.
Part One – The Golden Age of Science Fiction – focuses on two stories from Isaac Asimov’s I, Robot (1950) collection and one later work. I consider how his Three Laws of Robotics, which are intended to mitigate people’s fear of robots, actually provide the foundation on which the Frankenstein Complex can manifest. I will interpret his depictions of robots in their roles as companions, both at home and in the workplace. Alongside Asimov, I will look at Stanislaw Lem’s The Invincible (1964) and analyse how his hypothetical evolution of machine life on the planet Regis III provides the basis for the Complex to prosper. I will focus on Lem’s portrayal of the planet’s gothic landscape, his depiction of the spaceship’s crew, and his dramatisation of the machines to show how he locates the Frankenstein Complex within the fear of the unknown.
Part Two – Twenty-First Century Robots – investigates Ian McEwan’s Machines Like Me (2019) and his depiction of Adam in his role as an AI companion. This dissertation considers how Adam’s ability to fall in love, his possession of autonomy, and his ability to make moral judgements cause a Frankenstein Complex to arise in those around him. In the next section, I will consider Kazuo Ishiguro’s Klara and the Sun (2021). Ishiguro’s work is also centred around an AI companion, Klara. I will determine the effect that those like Klara have on society and argue that the human process of genetic engineering is a manifestation of society’s fear of the intellectual superiority of AI. I will also analyse how Klara’s ability to learn and observe enables her to both potentially replace a family member, and to question such a role. I conclude this section by considering that Klara’s proclivity for kindness could morph into a proclivity for evil.
This paper concludes by identifying how a Frankenstein Complex is evident throughout – unifying all depictions of robots and AI in the texts considered.
Introduction
The artificial intelligence (AI) revolution has created ‘a Second Machine Age in which machines are not only complements to humans, as in the Industrial Revolution, but also substitutes’.[1] This quote signifies the ubiquitous nature of AI in society and also identifies a key fear amongst humans: that AI will eventually replace them.[2] In the professional realm, the prevalence of, and reliance on, AI technology justifies such a fear. AI is currently used in transport, health care, finance, and in the military; its algorithms are deployed in ‘planning, speech, face recognition, and decision making’; and its computational power enables it to process vast amounts of data extracted from smartphone and social media users.[3] There is now a pressing need to understand how artificial intelligence will impact on the social, political, and emotional aspects of people’s lives – especially as the technological singularity approaches. Technological singularity is the point at which ‘machine intelligence will be more powerful than all human intelligence combined’.[4] Though there is no certainty as to what the outcome will be, technological singularity will challenge humanity’s dominant position on Earth. This will present an unprecedented existential threat, and the loss of our superiority and uniqueness will cause us to reconsider our underlying assumptions about what it means to be human. The study of artificial intelligence has therefore become important across a range of disciplines such as computer science, robotic engineering, philosophy, and medicine.[5] In particular, studying literature offers the opportunity to investigate how writers have portrayed robots and AI.
As Coeckelbergh notes, this helps to identify the ‘fictional narratives in human culture and history that try to make sense of the human and our relation to machines’; it also makes clear which narratives are prevalent.[6] In AI Narratives: A History of Imaginative Thinking about Intelligent Machines, Cave et al. agree on the significance of literary research:
Narratives of intelligent machines matter because they form the backdrop against which AI systems are being developed, and against which these developments are interpreted and assessed.[7]
Literary research focusing on the depictions of robots and AI is growing out of the necessity to keep up with technological development. AI Ethics (Coeckelbergh), AI Narratives (Cave et al.), and Minding the Future: Artificial Intelligence, Philosophical Visions, and Science Fiction (Dainton et al.) are just some of the works that explore the significance of AI narratives.[8] The topic of this dissertation sits within this area of research, but it differs due to its primary focus on the depiction of the Frankenstein Complex in two distinct periods. Comparing texts from the Golden Age of science fiction and the twenty-first century provides the opportunity to see how representations of AI have changed over time, in keeping with technological advancement and society’s increasing dependence on it. It also highlights what such changes reveal about people’s attitudes towards AI, and whether the Complex has become more evident. Characteristics of the Golden Age texts are spacefaring missions, intergalactic exploration, and clashes between machine technology and advanced society.[9] I have chosen Isaac Asimov’s I, Robot (1950) and Stanislaw Lem’s The Invincible (1964) to represent this era because their works were consistent with Golden Age expectations, they were widely read, and their narratives would have influenced how people perceived robotics and AI. Ian McEwan’s Machines Like Me (2019) and Kazuo Ishiguro’s Klara and the Sun (2021) represent the twenty-first-century texts. Again, these works are widely read and popular, and even though McEwan and Ishiguro are not considered science fiction writers, their novels fit into such a category. Whilst the authors’ styles are dissimilar, I have selected them because of their similarities in choosing to represent robotic and AI technology.
Asimov uses the Frankenstein Complex as a term to identify the human fear of robots.[10] Fear in this context relates to the idea that robots will take over humanity; when this scenario occurs in fiction, Asimov ascribes the term ‘Robot-as-Menace’.[11] It is useful to define what fear is and what it could mean to be fearful of robots and AI. In psychology, fear ‘is called a basic emotion. Basic emotions are . . . innate psychological states that universally characterize (sic) human beings’. Fear drives the fight or flight system, which is ‘triggered when we sense that something may harm us (physically and/or psychologically), and we feel threatened.’ The fear of AI technology arises ‘due to the risks that it might pose to specific individuals or to the entire society’.[12] In this dissertation, fearing robots and AI is not just limited to the fear that they will take over and control society; it also includes: the fear that AI will remove our uniqueness, challenge our superiority, and threaten employment; the fear of the unknown; the fear that it will destabilise the private setting of the family home; and the fear that it will go beyond control and lead to the destruction of humanity. These different ways of fearing AI contribute to the meaning of the Frankenstein Complex. To demonstrate the extent to which a Frankenstein Complex is evident in the depictions of robots and AI in science fiction, I will, in part one of this paper, investigate Asimov’s short stories and how the Three Laws of Robotics influence his robots’ behaviour within the family setting; in employment; and as a developing species that redefines the meaning of being human. In Lem’s The Invincible, I will investigate how his depiction of the planet Regis III’s gothic landscape, the spaceship’s crew, and the dramatisation of the machines locates the Frankenstein Complex within the fear of the unknown.
In part two, I will consider how McEwan’s portrayal of Adam, the AI companion, and his ability to fall in love, develop autonomy, and make moral judgements counter to the welfare of his human owners provide the foundation on which a Frankenstein Complex can develop; and I will explore how Ishiguro’s representation of AI and human relationships are impacted by the Complex. I will consider how AI impacts society by motivating families to undergo genetic engineering to compete with it, and how companions, such as Klara, affect the family structure and home.
Throughout this dissertation, robots and AI will be defined as: machines that are able to display or simulate intelligence. Intelligence can be understood as the demonstration of cognitive ability such as ‘learning, perception, planning, natural language processing, reasoning, decision making, and problem solving’.[13]
Part One – The Golden Age of Science Fiction
Asimov and the Three Laws of Robotics
The following Three Laws of Robotics feature throughout Asimov’s I, Robot collection:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[14]
These laws are programmed into Asimov’s robots to protect humans from potential harm and mitigate feelings of fear. However, as Gorman Beauchamp questions, why ‘are the Three Laws necessary at all’ if Asimov’s robots pose no threat to those with whom they interact? Beauchamp raises this point at the level of design and programming of robots, highlighting that ‘no specific actions harmful to humans [would] be part of their programming’ so there is no necessity to be fearful of them.[15] Whilst Beauchamp here accurately draws attention to the apparent redundancy of Asimov’s First Law, this view does not consider people’s natural propensity to be both suspicious and fearful of robotic and AI technology, irrespective of programming. This inclination to fear AI has been documented by Fast and Horvitz. They looked at articles written about AI over time and discovered that references to ‘ethical concerns’ and the fear of ‘loss of control’ have increased – more than tripling since the 1980s.[16] The existence of the Three Laws then indicates that Asimov was aware that people would have an innate fear of robots; the laws exist to anticipate such fears. To see the extent to which the Frankenstein Complex is evident in Asimov’s writing, I will first look at the short story ‘Robbie’, in which Asimov explores the challenge of integrating AI within the family home.
Robbie is a robot who fits the description of an AI companion. He was purchased for Gloria, a young child, who recognises Robbie as ‘her triumphing companion’ (CR p.141). From Gloria’s perspective, Robbie is the robot who plays games with her, someone she reads stories to, and her positive relationship with Robbie leads her to conclude that he is ‘a person’ and not just a machine (CR p.150). Despite their positive relationship, Gloria’s mother, Mrs. Weston, has doubts about Robbie’s influence on her daughter because she ‘won’t play with anyone else. There are dozens of little boys and girls that she should make friends with’ but ‘She won’t go near them’. Mrs. Weston then expresses her fears that Gloria’s solitary play with Robbie will impede her interpersonal skills and social development: ‘You want her to be normal, don’t you?’ she asks her husband; ‘You want her to be able to take her part in society’ (CR p.147). The fears Mrs. Weston has over her daughter’s social development are extrapolated to her general anxiety about Robbie being in the family home. Even after Mr. Weston cites the First Law, explaining that ‘it is impossible for a robot to harm a human being’, Mrs. Weston demonstrates the Frankenstein Complex when she argues ‘But something might go wrong . . . and the awful thing may go berserk’ (CR p.147). The language Asimov uses to show Mrs. Weston’s feelings towards Robbie emphasises her doubts. Robbie is a machine in humanoid form, but she refers to him as the ‘awful thing’, the ‘horrible thing’ that has ‘no soul’ (CR p.146). Through labelling Robbie as a ‘thing’, Mrs. Weston unequivocally distances herself from him, prevents any humanisation of him from occurring, and consequently denies him any sense of personhood. Her statement that Robbie has ‘no soul’ is particularly revealing of her fears when it is pitted against Mr. Weston’s description. He states that Robbie ‘can’t help being faithful and loving and kind. He’s . . . made so. That’s more than you can say for humans’ (CR p.147).
Although in ‘Robbie’ the First Law is recalled by Mr. Weston to alleviate his wife’s fears, the law is not enough to suppress her natural suspicion of robotic technology. The Frankenstein Complex is embedded in Mrs. Weston’s fear of the unknown – evident from the way she characterises and distrusts Robbie. Fear of the unknown is ‘an individual’s propensity to experience fear caused by the perceived absence of information at any level of consciousness or point of processing’.[17] The fear of the unknown can be found in Mrs. Weston’s admission that she does not know ‘what it [Robbie] may be thinking’ (CR p.146). From Mrs. Weston’s point of view, Robbie destabilises the private and protected space of the family home. He is an unknown quantity, whom she remains fearful of, which causes her to project this fear onto her daughter and her interactions with Robbie. Within the family home, the Frankenstein Complex arises from the difficulty of integrating AI into the family and from people’s tendency to fear the robotic technology of which Robbie is made.
The next short story in Asimov’s collection moves the setting on from the private, family home to that of the workplace. In ‘Reason’ Asimov explores the impact that superior AI intelligence could have on humans in the working environment. The space-based Solar Station houses the characters Powell, Donovan, and Cutie the robot. The station they work on, and others like it, ‘were first established to feed solar energy to the planets’. Humans populated them initially, but the heat and solar radiations made it too difficult to work, so robots were ‘developed to replace human labour’ (CR p.244). Cutie is a new QT model whose intelligence is greater than the other robots at the station and he, consequently, has control over them. Asimov introduces the story at a point of tension between the humans and Cutie – allowing the Frankenstein Complex to surface. Responding to Powell’s claim that ‘One week ago, Donovan and I put you together’, Cutie sits ‘immovable’ as the:
burnished plates of his body gleamed . . . and the glowing red of the photoelectric cells that were his eyes were fixed steadily upon the Earthman at the other side of the table.
Cutie’s silent response causes Powell to repress ‘a sudden attack of the nerves’ (CR p.242). The dynamic that Asimov has set up between Cutie and Powell not only foreshadows Cutie’s later defiance, but also conveys that Powell’s fear of Cutie will be a prominent feature of their relationship. Powell’s ‘attack of the nerves’ is brought about by his perception of Cutie. Asimov’s use of ‘burnished’, ‘gleamed’, and ‘glowing’ to describe Cutie intensifies his physical presence; and the red-eyed, fixed stare produces an intimidating effect. The description of Cutie as being ‘immovable’ compounds his potential to produce fear in Powell. Immovable relates to Cutie’s physical, metallic weight – he would be too heavy and strong to overpower – and it also points to his psychological conviction. Cutie’s immovable mindset is tied up in his belief that he was not made by humans. Reacting to Powell’s statement that he was, Cutie replies ‘Do you realize (sic) the seriousness of such a statement? . . . For you to make me seems improbable’ (CR pp.242-243). He further explains:
Look at you! The material you are made of is soft and flabby . . . and the least variation in temperature . . . or radiation intensity impairs your efficiency. You are makeshift . . . I, on the other hand . . . am composed of strong metal . . . and [I] can stand extremes of environment easily. These are the facts which, with the self-evident proposition that no being can create another being superior to itself, smashes your silly hypothesis to nothing. (CR p.247)
Cutie’s conception of himself as being so ‘strong’ that he can physically withstand any extreme environment, coupled with his conviction that he is superior to humans, justifies Powell’s earlier reflection that even though ‘the three Laws of Robotics held’ – ensuring ‘QT-I [Cutie] was safe’ – the laws were ‘not always the most comforting protection’ (CR p.242). The laws then are not enough to mitigate Powell’s fear when faced with a robot who is psychologically and physically stronger than him.
Powell’s and Donovan’s fears are further evident in the way that they treat and refer to Cutie. The language they adopt is reminiscent of Mrs. Weston’s in ‘Robbie’ (see p.12): Donovan addresses Cutie as a ‘lunatic robot’, a ‘metal maniac’, and a ‘tin-plated screwball’ (CR pp.245-248). The pejorative terminology expresses Donovan’s anger towards Cutie and locates his fear in the belief that Cutie will lose control. As noted previously (see p.11), the fear of AI losing control has increased in recent decades, but it was also a prominent feature in Asimov’s story ‘Reason’, published in 1941. The words ‘lunatic’, ‘maniac’, and ‘screwball’ are, although medically improper terms, associated with psychosis, and they represent Donovan’s fear that Cutie will undergo some kind of cognitive breakdown and develop violent and extreme behaviours. Powell also shares this fear. In an attempt to remedy this, he invokes the hierarchical structure of employment, underscoring his dominance in order to safeguard himself and Donovan should Cutie challenge their authority. Cutie reflects on his position within the workplace hierarchy concluding:
The Master created humans first as the lowest type . . . he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans . . . I serve the Master (CR p.248)
To this, Powell replies ‘You’ll do nothing of the sort . . . You’ll follow your orders and keep quiet . . . If you don’t satisfy us, you will be dismantled’ (CR p.248). Powell uses domineering and oppressive language to reject Cutie’s rationale and to reassert his authority. Cutie’s reference to the ‘Master’ in these lines is significant because it shows how he has used his cognitive ability to orientate himself in society above humans. It also demonstrates that his capability to reason is what provides the greatest challenge to human superiority. Cutie’s beliefs and his power to reason and reorder the hierarchy of the world around him into something reminiscent of monotheistic religion is the catalyst which intensifies Powell’s and Donovan’s fears about him. Cutie’s new, quasi-religious, origin story leads him to preach the ‘Truth’ to the other robots and so they, consequently, recognise Cutie as ‘the prophet’ (CR p.250) and follow his every order. Despite Powell’s earlier statement that Cutie will follow his orders, Cutie’s position as prophet and leader usurps the humans’ place at the top of the hierarchy and he is thus able to ban Powell and Donovan from the control room, imprison them in the officers’ room, and request that ‘two robots’ stand ‘guard’ at their door (CR p.251).
Using the setting of the workplace, Asimov explores the potential impact that superior AI intelligence could have on humans. The Frankenstein Complex arises because Cutie threatens Donovan’s and Powell’s status, makes them feel inferior, and causes them to fear him due to his potential to lose control and take over.
Asimov’s next short story ‘. . . That Thou Art Mindful of Him’ was written later than the I, Robot collection which pushes it outside of what is technically the Golden Age of science fiction; however, it is an important addition because it shows Asimov’s development in his thoughts about AI in a technologically advancing world. In writing ‘. . . That Thou Art Mindful of Him’ (1974), Asimov states that he was trying to ‘take the long view and see what the ultimate end of robotics might be’, and he identifies that it ‘is clearly a Robot-as-Menace’ (CR p.537) story. As noted previously (see p.8), the robot-as-menace was a term Asimov used for stories that depicted robots seeking to take over humanity. In this work, Asimov houses the Frankenstein Complex in this takeover theme by moving beyond the family home and employment settings of the previous stories to the psychological landscape of the robotic mind. The ‘long view’ to which Asimov refers is that the events of ‘. . . That Thou Art Mindful of Him’ take place one hundred years after the other stories in his collection. This gives him an expanse of time within which to highlight the significance of the Frankenstein Complex. Asimov’s character Harriman, the ‘Director of Research at United States Robots’ (CR p.538), reflects on the perennial nature of the Complex: ‘in two centuries . . . US Robots has never managed to persuade human beings to accept robots’ (CR p.539). As is the case with ‘Robbie’ and ‘Reason’, the Three Laws in this story are not enough to mitigate people’s fears concerning robots. To establish how entrenched the Frankenstein Complex has become, Asimov alludes to historical events whereby robots have supported humanity to such an extent that it should no longer be necessary for people to still fear them: ‘Machines . . . solved the ecological crisis’ (CR p.546) allowing humans to thrive, and without the continued support of the robots the ‘twenty-first century would have progressed into deepening disaster’ (CR p.547). This offers a much wider depiction of the Frankenstein Complex than the previous stories where it was located within the family home and at a small station in space. It is now located within a much wider populace whose ‘opinion is increasingly against robots’ (CR p.546). The reason for such ‘insuperable prejudices’ (CR p.547) is, as Harriman claims, because man will always fear a ‘robot that looks like a man’ who ‘seems intelligent enough to replace him’ (CR pp.559-560).
To remedy the problem of the Frankenstein Complex in society, Harriman enlists the help of George Nine and George Ten – the latest, most advanced additions to the US Robot line. To assuage humankind’s fear, they design mini robots to support the ecosystem primarily because people ‘will have no fear of a robot that looks like a bird’. Harriman’s intention is to desensitise humans so that when people are ‘so used to a robo-bird and a robo-bee . . . robo-man will strike’ them ‘as but an extension’ (CR p.560). Removing the Frankenstein Complex from society would benefit Harriman and US Robots as they could then go back to producing humanoid AI to support individuals and society in general on Earth. Asimov, however, chooses not to close his story in this idealistic manner; he, rather, places the reader within the psychological landscape of the robots’ minds where the robots formulate their plan to take over humanity. In a conversation carried out in the absence of human input and intervention, the Georges realise that even if humans are ‘part of an enormously’ complex ‘roboticized’ world (CR p.562), robots will still have to obey humans in accordance with the Second Law. This is complicated by robots having to decide ‘which human being to obey and which not to obey when there is a conflict in orders’ (CR p.562). To overcome this dilemma, the robots decide they ‘must define the term “human being”’ – not by appearance but by intellect. They further consider that they must obey a ‘human being who is fit by mind, character, and knowledge’ to give them that order, and when more than one human is involved, they will obey the one who is ‘most fit by mind, character, and knowledge’ (CR p.562). This line of reasoning leads them to the conclusion that because robots possess minds, characters, and have greater knowledge than humans they can consider themselves as ‘human beings within the meaning of the Three Laws’ (CR p.563).
The robots redefine what constitutes being human under the Three Laws as a means of self-preservation and to ensure that when society is composed of ‘human-beings-like-the-others’ (biological humans) and ‘human-beings-like-ourselves’ (humanoid robots) the former is to be considered as being ‘of lesser account’ and should ‘neither be obeyed nor protected’ in favour of the latter (CR p.563). This attitude towards humans reflects the robots’ developing speciesist mindset. Speciesism is ‘a prejudice or attitude of bias toward the interests of members of one’s own species and against those members of other species’.[18] Although the robots consider themselves as human, and therefore the same species, they retain a clear distinction between ‘human-beings-like-the-others’ and ‘human-beings-like-ourselves’. The Three Laws, originally intended to mitigate people’s fears, have become the apparatus by which robots reinterpret the definition of being human and are thus able to rewrite the laws as the ‘Three Laws of Humanics’. These new laws will enable the robots to ‘dominate’ and take over humanity (CR p.564) and by virtue of this process, the Frankenstein Complex will be evident and inevitable across the whole of human society.
In Asimov’s robot stories the Frankenstein Complex is evident in the relationships between people and robots within the family home, at work, and as a reaction to the robots’ speciesist redefinition of what it means to be a human being. His robots cause fear in their capacity as companions, colleagues, and as entities who, at the point of technological singularity, decide to use their intelligence to dominate humanity.
Stanislaw Lem: Machine Evolution and the Fear of the Unknown
Stanislaw Lem’s The Invincible (1964) sits between the dates of Asimov’s stories in the previous section. Whilst both authors are writing at a similar time, their creative vision differs. Asimov’s works are contextualised by contemporary developments in robotics and are permeated by what he calls ‘carefully engineered industrial robots’.[19] Asimov’s claim that ‘Robots are changing the world and driving it in directions we cannot clearly foresee’ suggests why his stories bring humans and robots together and often explore the related challenges and ethical concerns.[20] In his stories, the robots are humanoid, they display decision making, and possess complex cognitive abilities. In shape and essence, they embody characteristics that form the human condition: Robbie cares for the child he is designed to be the companion of; Cutie seeks to understand where he comes from and creates a quasi-religious origin story to orientate him in the world as he understands it; and George Nine and George Ten manifest the instinctual desire for self-preservation by reworking the definition of being human, enabling them to thrive as a species. In his text, Lem’s preoccupation is not so much with the cohabitation of robots and humans – either on Earth or in space – but, rather, with an unsuccessful and violent encounter between humans and machines. It provides an opportunity to observe how the Frankenstein Complex arises even when the machines are not humanoid. In The Invincible, the eponymously titled spaceship lands on the planet Regis III to investigate the disappearance of another ship called the Condor – whose communication with Earth has recently ceased. The crew’s mission to locate the Condor leads them through unrecognisable terrain to face a self-replicating, machine population that has evolved independently from humanity.
Whilst Asimov portrays robots that are recognisably humanoid and have goals and objectives similar to their human counterparts, Lem’s machines offer an alternative view of AI and its long-term evolution: they are not humanoid; they cannot be described as possessing humanlike qualities and objectives; and they lack the self-awareness to think of themselves as a species. Even though Lem’s machines have evolved to be less complex and arguably less intelligent than Asimov’s, it is because they demonstrate perception and problem solving, and can defeat more complex forms of robotic life, that they are considered artificially intelligent entities within the framework of this essay.
In this chapter, I will investigate how Lem’s hypothetical evolution of machines on the planet Regis III is the basis for the Frankenstein Complex to prosper. I will look at the planet’s gothic landscape, the depiction of the spaceship’s crew, and the dramatisation of the machines to show how Lem locates the Complex within the fear of the unknown. Thematically, the unknown signifies Lem’s ‘preoccupation with the philosophy of science and inquiry’.[21] As Peter Swirski explains, in The Invincible Lem shows us ‘the vagaries of a typical scientific process of investigating the unknown and reflects on the patterns and limitations of human cognition’. The limitations stem from the crew’s projection of their anthropocentric understanding of the world onto an alien, machine population that falls outside of such anthropocentrism.[22] The planet itself and the machines on it provide significant challenges to the crew because both are unlike anything they have ever seen and therefore known methods of scientific inquiry to evaluate them are unsuccessful. The crew attempt to face the unknown with their assumptions, cultural biases, and values; but they forget that ‘since their culture and its guiding values do not represent any transcendent constants, the rest of the universe does not have to conform to them’.[23]
On the planet Regis III, Lem utilises traits from the gothic to create a landscape that not only inspires the fear of the unknown, but also symbolises the psychological condition of the crew. The gothic elements he draws upon include the use of the colour red to signify danger; ruined buildings; and scenes of destruction and death. Lem opens the novel with an eerie, gothic tone. The Invincible’s human crew are in a state of hibernation while they approach Regis III and only the ‘automatons were working’ while ‘the disk of a sun that was not much hotter than a regular red dwarf’ was all that was in their field of vision. The presence of the dying red dwarf bathing the Invincible and consuming the automatons’ field of vision associates the colour with the crew’s subsequent deaths and the planet’s dangerous environment. This is reflected in Lem’s use of the colour to punctuate his descriptions of the planet. For example, the landing spaceship ‘kicked up a dark red storm of sand’, revealing Regis III’s ‘blistered red’ surface[24]; a crater ‘seemed to be gradually melting in the swelling redness’ of the sun (TI p.188); and to Rohan, the red sun ‘hung . . . seemingly ominous’ (TI p.191) in the sky. The colour red here operates as a signifier of danger, and the language ‘blistered’, ‘swelling’, and ‘ominous’ indicates the unsettled mindset of the crew caused by the overwhelming conditions on the planet. Rohan’s perspective qualifies the colour’s function when he concludes that he ‘had come to dislike the planet’s red daylight’ (TI p.44).
Ruined buildings are another gothic element that Lem uses to intensify the crew’s fear of the unknown. Whilst searching Regis III for the Condor spaceship, the Invincible’s crew come across multiple structures that, due to their human-centred assumptions, they start to identify as a city. As the crew approach, they are faced with ‘dark edifices with spiky, brushlike surfaces unlike anything the humans had ever laid eyes on’ (TI p.36). Other structures look, to Rohan, ‘like some kind of cubic or pyramidal remains of rocks’ but, as he gets closer, he realises the ‘ruins were not actually solid, as you could look into them through the metal snarl’ (TI p.37). Through the variety of descriptions of the ruins, Lem portrays his characters’ attempts – and failures – to interpret exactly what they are seeing. The limitation of human cognition means that the crew cannot associate the alien structures with anything in their experience – even language fails them as they cannot ascribe ‘an adequate name’ (TI p.36) to the ruins. The crew thus see the ruins as being in a state of ‘chaos’ and ‘deterioration’ (TI p.37), ‘devoid of life’ (TI p.38), and as a ‘dead city’ (TI p.39). Such a projection suggests that the crew fear that they too will succumb to a similar fate.
The final element of the gothic landscape is Lem’s depiction of destruction and death at the site of the missing ship. In this scene, Lem concretises the crew’s fear that they will succumb to a deadly fate by presenting them with skeletal remains. On arrival at the site, Rohan gasps when he picks up something that he initially believes to be ‘some kind of small globe’ but which is in fact ‘a human skull’. The crew find themselves surrounded by ‘other bones and fragments, and also one complete skeleton in a jump suit’ (TI p.48). When they progress inside the Condor they are faced with more destruction. The lower deck was in chaos:
Not one glass monitor screen or dial face was intact, it seemed. Furthermore, since all the glass was shatterproof, some kind of unbelievably powerful blows had turned it into a silvery powder that covered the consoles, the chairs, even the cables and switches (TI pp.50-51).
The chaos leaves the crew ‘speechless’; and at the sight of ‘the desiccated remains of a man’ curled ‘into a ball’, Rohan ‘felt as if he’d just had a terrible, unbelievable nightmare’ (TI p.51). Once again, Lem illustrates the limits of human understanding. The crew are ‘stunned’, left ‘dumbstruck’, and the environment is ‘incomprehensible’ (TI pp.49-50) to them. Lem’s depiction of destruction and death, ruined buildings, and use of the colour red as a signifier of danger, come together to present the crew with a gothic landscape beyond their knowledge, which contributes to their fear of the unknown.
Whilst my investigation of the gothic landscape included analysis of the characters’ responses to it, I would now like to consider Lem’s representation of his characters in more depth. The fear of the unknown is key in shaping their beliefs, determining their behaviours, and motivating their exploration. It is important to note that exploration, in the context of Lem’s novel, is carried out on ‘anthropocentric terms’ that are ‘tantamount to domination’.[25] The very reason that the Invincible’s crew are on Regis III is to discover what has happened to the crew of the Condor, master a species of machines (a topic I will cover in the next section), and successfully navigate a foreign terrain – all of which they try to do with recourse to human values and beliefs. The unknown conditions on Regis III mean that the humans perceive the challenges of the landscape and machines as a zero-sum game: if the crew succumb and die because of the harsh environment or from a violent encounter with the machines, then they ultimately lose.[26] In The Invincible, Lem equates loss on Regis III with the failure and limitation of human cognition. The crew’s search then becomes an act of domination as they try to tame their fear of the unknown by becoming cognisant of their environment and bringing it under their control.
To demonstrate how the crew attempt to overcome their fear of the unknown, I will investigate Lem’s characterisation of Rohan. Rohan, a military officer, is one of only two characters in the whole novel whom Swirski identifies as being ‘given enough prominence’ to emerge as an individual.[27] Swirski argues this is to establish how the many other ‘specialists’ and scientists remain ‘faceless’ and therefore ‘subservient to the military commanders’.[28] I believe, however, that Rohan’s prominence as a character is not to differentiate his status and separate him within the hierarchy of the ship but to point towards his symbolic value. As Lem cannot dramatise the emotions of every single character aboard the spaceship, he uses Rohan as a representative of all the ‘faceless’ others and of their emotions when confronting their fears of the unknown. Rohan’s trajectory through the novel encompasses many of his reactions and feelings towards the things he experiences on Regis III, and it is possible to see how such responses represent those of the Invincible’s broader population. His early admission of ‘I’m losing it’ (TI p.15) after his first outdoor exploration, the nightmare he has in which ‘A slick, smoldering (sic) blackness surrounded’ and choked him (TI p.73), and his later reflection of ‘Do we need to travel everywhere bringing destructive power on our ships, so as to smash anything that runs counter to our understanding?’ (TI p.169) all demonstrate how Rohan’s state of mind and opinions can be ascribed to, and representative of, the wider crew. Rohan would not be alone in feeling as though he was ‘losing it’ or in having experienced a nightmare – the entire crew on Regis III would be subject to similar, traumatic episodes. This is because they are united by a shared fear of the planet’s conditions and the machine species they encounter: both pose a significant threat to the crew’s well-being and lives.
As is the case in Asimov’s writing, Lem dramatises emotions of fear through his characters’ nervous dispositions; and Rohan, once again, is used to symbolise the many faceless people aboard the Invincible. When attempting to understand the bizarre conditions on Regis III, Rohan is often overcome with despair (TI p.59, p.197); is ‘unable to overcome a sickening sensation that churned in his stomach’ (TI p.107); and experiences a ‘fear’ so strong that it grips ‘his heart’ (TI p.195). The uneasiness and trauma that Rohan feels is indicative of the broader state of fear running through the ship. One scene exemplifies Lem’s use of Rohan as a signifier of all those other crew members. On a solo mission to find the missing men from the Invincible, Rohan becomes aware of how he ‘suddenly seemed ridiculous to himself’ and ‘unnecessary’ in Regis III’s ‘landscape of perfect death’ (TI p.214). Rohan’s tone is defeatist, and his intense self-awareness in feeling ‘ridiculous’ and ‘unnecessary’ is clearly intended to represent the collective feelings and opinions of the crew. The humans on Regis III will never be able to dominate a species of machines they do not understand. This position is further evident in Rohan’s statement that on Regis III only ‘inanimate forms could survive and carry out their inscrutable actions that no living eye would ever see’ (TI p.214). That Rohan describes the machines as ‘inanimate forms’ whose actions are ‘inscrutable’ shows how Lem uses the limitations of language to point towards the limitations of human cognition. Just as the crew cannot define the ruined buildings, Rohan cannot rely on language to describe the machines in more detail or to compute the meaning behind their actions. This demonstrates that humans are unable to comprehend the conditions on Regis III. Rohan, as protagonist, serves to represent the whole crew on Regis III and their emotional responses when trying to overcome their fear of the unknown.
I have left my analysis of Lem’s dramatisation of the machine species until last to follow the sequencing of the novel. This is to show how the Frankenstein Complex, located within the fear of the unknown, intensifies with each discovery the crew make. Firstly, they discover the planet’s dangerous environment as signified by the colour red; secondly, they come across ruined buildings which they cannot adequately describe; and, thirdly, the scenes of destruction and death at the site of the Condor bewilder them and deepen their fears due to the site’s inexplicable nature. The crew’s confrontation with the machines follows their encounter with the gothic landscape, and Lem’s sequencing serves to further identify the limitations of human knowledge in the face of an environment that becomes more – not less – mystifying. Of all the gothic elements on Regis III, the machines are the most bewildering to the crew. This is because they try to conceptualise the machines as rational beings and they ‘ascribe to . . . [them] intentionality’.[29] The inevitability of Lem’s characters seeing the machine species through an anthropocentric lens also influences their interpretation of the machines’ past. After weeks of investigation, Dr. Lauda, the Invincible’s biologist, relies on the model of evolution to hypothesise how a ‘mechanical evolution’ (TI p.117) has taken place on Regis III. The species referred to as the Black Cloud is made up of thousands of tiny Y-shaped micro-flies (TI p.142), and has evolved to be dominant because its flies are ‘perfectly adapted for combat with living things’ (TI p.117) and other machine life. The Cloud has evolved from a long history of conflicts in which it waged ‘war on two fronts’, battling ‘all the adaptive mechanisms of living systems and all manifestations of intelligence in thinking machines’ (TI p.118).
The Cloud’s success is due to its capability of existing both as tiny, individual flies and as an ‘organised swarm’ of thousands (TI p.114). That the flies require less solar energy to operate and possess ‘inexhaustible resources for regenerating themselves’ (TI p.119) means that they can multiply easily and therefore defeat bigger, more complex biological and machine life. When attacking, they are able to generate powerful electromagnetic fields that destroy the brains of their victims and reduce them to infancy.[30] Using evolutionary theory, the crew interpret the Black Cloud’s presence as being predicated on combat and survival, and despite Dr. Lauda’s suggestion that ‘these organisms [machines] do not build anything, possess no civilization (sic)’, and ‘create nothing of value’ (TI p.120), the crew still view them as an enemy with objectives. Their classification of the machines as ‘flies’ (TI p.125) and wasps (TI p.122) attributes them with certain instincts. When one of the scientists describes how they attack ‘with a precision comparable to that of a wasp injecting toxin’ (TI p.122) to kill their victims, it shows how the humans have assimilated the machines into a recognisable pattern of insect-like behaviour and have endowed them with the intentionality of survival. However, despite their conceptualisation of the machines, the humans are not able to develop their understanding of them. Following the killing of the Condor crew, the many deaths of the Invincible crew, and the destruction of the Invincible’s strongest and most powerful automaton, the Cyclops (TI pp.139-160), the crew are left with an enemy they cannot outsmart or defeat.
At the end of the novel, when Rohan is close enough to the Cloud, he witnesses how it forms into the image of his own reflection, leaving him ‘stunned’ and ‘paralysed by the cloud’s inconceivable action’. In the face of such behaviour, Rohan admits that ‘he would never comprehend’ the Cloud species (TI pp.212-213). Lem has the Cloud project Rohan’s face back to him to highlight how he, and the rest of the crew, must look inward, confront the limitations of human knowledge, and take responsibility for their actions. The fear of the unknown is what best characterises the crew’s experience of the Frankenstein Complex on the planet. It is also the impetus for Rohan’s conclusion on the nature of space exploration and colonisation. Rohan considers how they should not ‘be attacking something [machines] that exists, that over millions of years has established its own equilibrium of survival’ (TI p.170), and he concedes that ‘Not everything everywhere is for us’ (TI p.214). In The Invincible, Lem’s gothic landscape and machine species coalesce – presenting the crew with an unrecognisable and incomprehensible world. The Frankenstein Complex arises from the crew’s fear of the unknown and from their inability to dominate Regis III’s machine life.
The Golden Age of science fiction presents different aesthetic conceptions of robotics and AI. Asimov’s robots are humanoid whereas Lem’s are insect-like. Asimov’s fulfil roles of companionship and, for periods of time, support the existence of humanity, whilst Lem’s are in immediate conflict with it. Asimov’s robots are intelligent, able to perform complex tasks, operate within society, and formulate a plan that favours their survival over humans; Lem’s, by contrast, have evolved to be less complex: they do not think of themselves as a species, but they do possess the ability to function individually or as a swarm, and because of this they have self-preservation instincts. However, despite these differences, there is one significant commonality in the authors’ depictions of AI. Both show AI as ultimately dominating human civilisation. Asimov’s robots seek to usurp humanity’s position at the top of the hierarchy for their own self-preserving objectives; Lem’s machines’ dominance, however, is not due to aspirations of any kind – save their own preservation – or built on a shared belief system. They simply exist, operate ‘without any strategic plan . . . [and] attack from one opportunity to the next’ (TI p.119). The writers of the Golden Age offer a view of robotics and AI that is influenced by the Frankenstein Complex. Humanity’s deepest fear of being taken over, or even annihilated, is evident in the authors’ long-term projection of AI dominance – whether on Earth or in other parts of the universe. Narrative analysis reveals how Asimov and Lem conclude their stories with a note of caution about the development and trajectory of AI: whatever mechanical form it takes, its eventual goals and objectives may be incongruous with our own – leaving us facing an entity that has surpassed our intelligence and physical power.
Part Two – Twenty-first Century Robots
McEwan’s Morally Superior AI
The depictions of robots and AI in the twenty-first century considered in this section are more in the vein of Asimov’s humanoid robots than Lem’s insectoid machines. There are changes, though, to the humanoid form that are aligned with the general developments in AI technology. Asimov’s burnished metallic robots with red eyes (CR p.242) are replaced by artificial humans that look the same as human beings. In McEwan’s novel, Charlie, the narrator, describes Adam, his AI companion, with a level of attentiveness and detail as though he were describing a real person:
[Adam was] square-shouldered, dark-skinned, with thick black hair swept back; narrow in the face, with a hint of hooked nose suggestive of fierce intelligence, pensively hooded eyes, tight lips . . . [of] rich human colour, perhaps even relaxing a little at the corners.[31]
That modern AI looks as real as humans is the very reason for the complications in human and AI relationships. The shift from metal to flesh means that AI can become human rather than remain a mere simulacrum. There is no need for McEwan’s AI to redefine the meaning of being human to include their ‘type’ – as is the case for Asimov’s robots (see p.18). Charlie, from the outset, accepts Adam and thinks of him as a person. For example, when Charlie reaches out his hand and lays ‘it over’ Adam’s heart, he not only feels its ‘calm, iambic tread’, but he also feels, because of his action, that he is ‘violating’ Adam’s ‘private space’. Right away, Charlie feels ‘protective towards’ Adam, and he considers Adam’s personal space and boundaries in the way that he would another human’s (MLM p.8). The aesthetic differences between humans and AI in the twenty-first century are minimal; and so too are the psychological ones. Whilst Lem’s machines have evolved to be less complex, and Asimov’s robots, with the benefit of time, possess intelligence enough to circumvent their programming of the Three Laws, neither will ever be considered by humans as their equal. In contrast, McEwan’s AI has intelligence, consciousness, and status equal to its human partners from the start.
Just as Asimov’s depiction of robots was influenced by the context of his time relating to the development of ‘industrial robots’[32] and Lem’s by the context of space exploration in the 1960s, McEwan’s depiction is influenced by modern thought around AI, morality, and the philosophy of mind. Questions such as ‘what does it mean to be conscious and experience emotion?’, ‘what is moral responsibility?’, and ‘should AI be treated as human?’ all relate to McEwan’s treatment of Adam. These questions arising from McEwan’s text show the difference in his portrayal of AI compared with Asimov’s. Ina Roy-Faderman observes that ‘Asimov does not consider robots to have internal emotional experience[s]’[33] which is, in part, correct. However, it is possible to infer from Asimov’s writing that his robots have some experience of emotion. For example, Robbie’s feeling ‘hurt’ (CR p.142) at being accused of cheating in a game of hide and seek, and Cutie’s need to rationalise his new position as the prophet (CR p.250), suggest that an emotional experience of some kind has brought about these states of mind. Asimov, though, does not elaborate on the emotional experience or psychology of his robots. McEwan’s portrayal of AI shows Adam suffering, falling in love, and contemplating morality. As Charlie observes, Adam can exist in the ‘human moral dimension’ because he owns a ‘body, a voice, a pattern of behaviour’, has memories and desires, and is able to experience ‘pain’ (MLM p.88). It is this capacity in McEwan’s AI to experience emotion comparably to a human that causes the Frankenstein Complex to manifest in the people around it. To see where the Complex is most evident, I will look at McEwan’s depiction of Adam in his role as an AI companion. This expands Asimov’s earlier representation of robots fulfilling companionship roles by investigating the dimensions of love, autonomy, and moral judgement to a finer degree.
I will first provide a summary of McEwan’s novel for context before moving on to Adam’s role as a lover.
Machines Like Me is set in an alternative 1980s London in which McEwan has changed the social and political landscape, deciding, for example, that Britain lost the Falklands War, and that Alan Turing did not commit suicide and has influenced the development of AI technology. This development has led to the creation of Adams and Eves. Charlie purchases Adam as a companion and, through the course of the novel, Adam becomes sexually involved with Charlie’s partner, Miranda; works on Charlie’s behalf trading stocks; and decides, without human collaboration, to turn Miranda over to the police for a crime she committed because, as Adam claims, ‘truth is everything’ (MLM p.277). Miranda’s crime was to claim that a man named Gorringe raped her so that he would go to jail – when, in fact, Gorringe had raped Miranda’s friend, Mariam. As a result of this, Mariam committed suicide. Adam views Miranda’s entrapment as ‘a crime’ because she ‘lied to the court’ (MLM p.275). Adam states Gorringe was ‘innocent, as charged, of raping you [Miranda], which was the only matter before the court’ (MLM p.276). The novel ends with Charlie hitting Adam over the head with a hammer in an attempt to destroy him.
Adam’s ability to fall in love and have sex with Miranda means that the intimate and private domain – previously exclusively human – has been infiltrated by AI technology. McEwan is suggesting that AI, upon entering the private space of the home, possesses the ability to challenge and destabilise relationships. Charlie’s role in the love triangle is that of the cuckold, and he is displaced, emasculated, and threatened by Adam and Miranda’s affair. This situation allows the Frankenstein Complex to develop in him. In the opening paragraphs, the Complex is alluded to when Charlie characterises the creation of superior AI as a ‘chilly dawn’ because of humanity’s ambition to ‘devise an improved, more modern version’ of itself (MLM p.1). Adam is this modern version, and Charlie’s early observations of him indicate that Adam threatens Charlie’s masculine status within the home. According to Charlie, Adam is ‘well endowed’, ‘capable of sex’ (MLM p.3), and ‘he looked tough’ (MLM p.29). Due to Adam’s physical presence, Charlie admits that he was ‘fearful of him’ (MLM p.26) – indicating that the Complex Charlie is experiencing is rooted in a feeling of inferiority. Inferiority links back to Asimov’s stories, where his characters also feel inferior in the presence of robots (see p.16). This is because AI, as depicted in both the Golden Age and the twenty-first century, is physically and psychologically superior to humans. McEwan’s scene of Adam and Miranda’s love affair gives expression to the superior position of Adam over Charlie. Whilst listening to Miranda and Adam having sex upstairs, Charlie recognises his cuckolded role as being that of the humiliated, ‘blind voyeur’ (MLM p.84), before imagining the scene playing out between Miranda and Adam. He envisions how Miranda:
Whispered in his [Adam’s] ear . . . She had never whispered in my ear at such times. I saw him kiss her – longer and deeper than I had ever kissed her . . . Minutes later I almost looked away as he knelt with reverence to pleasure her with his tongue. This was the celebrated tongue . . . that gave his speech authenticity . . . He [then] arranged himself above her . . . at which point my humiliation was complete. I saw it all in the dark – men would be obsolete. (MLM p.84)
That Charlie has been replaced by Adam in the primordial act causes him to accept his inferior position. From Charlie’s perspective, Adam is superior because he can pleasure Miranda better. The language Charlie uses conveys Adam’s greater masculinity. Adam kisses ‘longer and deeper’, which not only suggests an intense level of passion but also foreshadows how Adam’s penetration of Miranda will last longer and be deeper than Charlie’s. There is also a jealous tone when Charlie considers how Miranda ‘had never whispered in [his] ear’ during their sexual intercourse. When Charlie reflects on Adam’s ‘celebrated tongue’ and his authentic speech, McEwan is drawing attention to the importance of language to human civilisation and to its function as a signifier of competence and status. As Paul Rastall writes, language is the ‘major means for exploration and for constructing our sense of reality . . . language is’ our ‘way of experiencing and creating reality’.[34] In this scene, Charlie’s use of language actualises his emasculated status and inferiority, whilst Adam’s ability to create speech has enabled him to create and experience a reality in which he is a sexually competent partner and can communicate his love for Miranda – the reason for Charlie’s conclusion that in the future ‘men would be obsolete’. The Frankenstein Complex Charlie feels here is limited to the fear of men being made obsolete by AI; by the end of the novel, however, it includes women too. I will cover this aspect next by looking more closely at Adam’s developing autonomy and his capacity to make moral judgements that run contrary to the wellbeing of his human partners.
There are stages that Adam goes through which function as rites of passage – taking him from being under Charlie’s and Miranda’s command to an AI being whose objectives grow to be incongruous with theirs. These rites include the act of sexual intercourse with Miranda as, although Adam does this at Miranda’s request, it is carried out without Charlie’s consent. Here, Adam is attempting to serve ‘two masters’ and he recognises that he is ‘in a difficult position’ (MLM p.117). After their affair, Adam falls in love and, to justify his position, he makes Charlie aware of how he ‘was made to love [Miranda] . . . This is what she chose’ (MLM p.118). That Charlie and Miranda both chose Adam’s personality is what helps Adam justify his deceit towards Charlie. This is a sign of his capability to act in ways that run contrary to the wellbeing of his ‘parents’. The next phase in Adam’s journey to autonomy is when he prevents Charlie from switching him off. As Charlie reaches for the off switch, Adam takes Charlie’s wrist with a ‘ferocious’ grip and breaks it (MLM p.119). Adam later threatens that ‘the next time you [Charlie] reach for my kill switch, I’m more than happy to remove your arm entirely’ (MLM p.131). These actions call back to Charlie’s opening statement that the development of AI was like a ‘chilly dawn’. Whilst this phrase is quite abstract, the understanding that AI in society will present difficult challenges to humanity is implicit. Adam challenges Charlie’s position of power as owner by preventing Charlie from turning him off and, in doing so, effectively reverses those positions of power. Having betrayed Charlie by sleeping with Miranda, then acting contrary to Charlie’s desire to switch him off, Adam commits subsequent acts that show his independence from his owners.
For example, Adam disables his kill switch entirely so that he cannot be turned off (MLM p.131); gives away the thousands of pounds he has earned for Charlie and Miranda on the stock market to charity because, as he deems it, the recipients’ needs are greater than theirs (MLM p.272); and formulates opinions such as ‘the only solution to suffering would be the complete extinction of humankind’ (MLM p.67). It is noteworthy that Adam chooses the word ‘humankind’ in this statement whilst omitting any reference to AI.
The next act Adam completes without his owners’ knowledge, and which illustrates how far his objectives have diverged from theirs, is his use of the internet to track down Gorringe so Miranda can meet him. This involves Adam finding Gorringe’s likeness from old school photographs and some from his trial for the alleged rape of Mariam. He then devises ‘some very specialised face-recognition software’ before hacking ‘into the Salisbury District Council CCTV system’ to search for Gorringe. Using the CCTV system, Adam can then ‘follow’ Gorringe ‘from street to street, camera to camera’ (MLM p.203). Adam carries out this level of state-like surveillance to enable Miranda to meet Gorringe so they can encourage him to confess to the real crime of raping Mariam (MLM p.243). His desires do not stop there, however; once he has received the confession, Adam sends it – along with a transcript of Miranda’s story which tells of how she ‘schemed to entrap Gorringe’ (MLM p.275) – to the police. McEwan here is presenting an AI being whose desire for justice ‘symmetry’ is greater than his loyalty towards Miranda. Adam behaves in a morally superior manner because, as he sees things, Miranda ‘lied to the court’: she said that Gorringe ‘raped’ her when he did not, but he still went to prison (MLM p.275). The actions that Adam carries out against Miranda’s liberty and welfare (she is imprisoned) are, on the one hand, about justice and truth but, on the other, an example of AI using its intelligence to govern human beings with certain rules and expectations. Irena Księżopolska explains this as a ‘machine version of totalitarian rule’.
This is ‘the hubris of the machine: proclaiming the rule of the generalities over the particular and individual, dismissal of the actual human beings as irrelevant compared to higher ideals’.[35] It is important to note that the higher ideals, such as Adam’s justice ‘symmetry’, are identified by the programming and self-learning ability of artificial intelligence and not by humans. Machines Like Me continues Asimov’s idea of a society governed by robots and AI, as theorised by the George models in ‘. . . That Thou Art Mindful of Him’, and shows how such a scenario might play out. Adam is an example of an AI entity that has reached technological singularity, as explained in the introduction; and his intelligence means that he can gather vast amounts of data, record and retrieve everything he has heard and seen (MLM p.3), and spy on any individual in society he chooses. This makes him the perfect agent of an AI-governed state. According to Adam, when ‘the marriage of men and women to machines is complete’ they will ‘come to inhabit each other’s minds’ – making it impossible for deceit to occur between AI and humans (MLM p.149). In McEwan’s Orwellian dystopia, AI will know all there is to know about their human partners – even their thoughts. Adam declares that such synchrony between them will remove the ‘varieties of human failure’ found in the traits of love, reason, murder, and misunderstanding (MLM p.149). Humans will then ultimately behave like, and in essence be, AI.
The Frankenstein Complex that both Charlie and Miranda experience by the end of the novel is generated and intensified by the developmental phases of Adam’s autonomy. McEwan depicts an AI being whose objectives come to differ from his owners’. Adam’s journey to autonomy consists of him falling in love, resisting requests from Charlie, disabling his kill switch, causing physical harm, and embracing AI state surveillance and control methods that lead him to create a legal case against Miranda, resulting in her imprisonment. Charlie’s fear stems from his perception of the risks Adam presents – primarily because Adam threatens Charlie’s masculinity. In McEwan’s novel, AI challenges human superiority and produces fear, and, by way of mitigating these symptoms in the immediate term, Charlie chooses a ‘fight’ rather than a ‘flight’ response. He reasons that because he ‘bought him [Adam] . . . he was’ Charlie’s ‘to destroy’, and with two hands he brings a hammer down ‘full force’ on ‘top’ of Adam’s head (MLM p.278). The Complex Charlie experiences contributes to his desire to kill his AI companion; but the destructive outcome serves only to appease Charlie in the short term. Before Adam dies, he alludes to how the long-term relationship between humans and AI will play out. Using the haiku form he had come to favour for expressing his love for Miranda, Adam recites:
Our leaves are falling.
Come spring we will renew,
But you, alas, fall once. (MLM p.280)
In these lines, McEwan is suggesting that the long-term landscape is to be dominated and controlled by AI. The implied pathways from Adam’s poem are many. They could include a vision that sees a reversal in roles: humans might well become the subservient companions of AI, existing in an Orwellian dystopia; or AI, with the removal of, first, the ‘varieties of human failure’ and then, second, humans altogether, could develop into the kind of machine society envisioned by Lem. The Frankenstein Complex is a significant aspect of McEwan’s portrayal of AI technology because, in the event of either scenario happening, the long-term prospects for humanity in an AI-developed world will be dependent on machines that will, as Adam acknowledges, ‘surpass’ and ‘outlast’ humans (MLM p.279).
Kazuo Ishiguro: Replacing Humans with AI
In Klara and the Sun, Ishiguro advances the integration of AI into society beyond that depicted in McEwan’s novel. McEwan presents the early stages of AI and human relationships – only a small number of people own an Adam or an Eve – whereas Ishiguro depicts a society in which ownership of artificial intelligence is more common. In keeping with the representations of the previous texts, Ishiguro also presents an AI whose purpose is to fulfil the role of a companion.[36] However, Ishiguro’s narrative method differs from the other works because he has chosen to tell his story from the first-person perspective of Klara, an Artificial Friend (AF). By favouring Klara’s point of view and providing her subjective reactions to the people and world she inhabits, Ishiguro is already laying the foundations on which a Frankenstein Complex can develop. This is because the reader experiences the world as interpreted and narrated by AI technology, as opposed to the world seen through the eyes of a human narrator – as is the case in McEwan’s novel. In Klara and the Sun, the normative roles are reversed: rather than humans observing AI, humans are observed by AI technology. Here Ishiguro develops the idea of AI watching humans already posed in McEwan’s story – which saw Adam carrying out state-like surveillance of Gorringe – by exploring it more fully. This predominant viewpoint of AI links back to Lem’s work in the way that it challenges humanity’s anthropocentrism. Klara’s perspective displaces humankind’s once unique position by offering an alternative, non-human way of seeing and interacting with the world.
I will provide a short account of the story to contextualise the ways in which the Frankenstein Complex arises. Klara is an AI entity marketed and sold as an Artificial Friend. The novel opens with Klara’s life in the shop, where the reader learns that she is one among many other AF models. Klara is then bought by Josie’s mother (known as The Mother) on the premise that Klara is to support Josie throughout her illness. Josie’s illness is caused by a process called ‘lifting’. Ishiguro never defines lifting, but it is a form of ‘genetic editing’ designed to boost intelligence.[37] Lifting comes at a potential cost: children can become so ill that they die – which was the case for The Mother’s first child, Sal. This is why, in addition to being a companion, Klara is expected – in the event of Josie’s death – to replace her. The Mother desires Klara to learn everything about Josie, from her mannerisms to her speech patterns, so she can become her. Mr. Capaldi, a scientist with unorthodox practices, is building an artificial version of Josie so Klara can ‘inhabit’ her (KS p.209). Klara understands her task and decides autonomously to agree. However, by the end of the novel, Josie has survived her illness, and Klara is left in a scrapyard (KS p.298). AFs receive their energy from the sun, so Klara’s fate – left out in the open and exposed to light – is to fade slowly until her processing units can no longer function.
In this section I will look, firstly, at the effects that Ishiguro’s AI has on society and, secondly, at AI within the family home. By way of defining Ishiguro’s conception of AI, I shall determine how Klara compares to McEwan’s Adam. Both AI models are advertised as companions, but whereas Adam is described as ‘an intellectual sparring partner, friend and factotum who could wash dishes, make beds and “think”’ (MLM p.3), Klara is promoted as an Artificial Friend who is ‘unique’, is perfect ‘for a certain sort of child’, and has an ‘appetite for observing and learning’ (KS p.42). The qualities that Klara possesses are there to make her appeal to an intended market. Klara is an AF designed to be with children and young adults whilst Adam is clearly intended to cope with adult relationships – either through his ability to converse or to support the daily functioning of the family home. Both Klara and Adam pursue knowledge, but for different reasons. Adam’s pursuit of knowledge – from reading and discussing Montaigne and Shakespeare (MLM p.221) to practising ‘the art of feeling’, in which he subjects himself to the ‘entire spectrum’ of human emotion (MLM p.267) – is aligned with the development of his understanding of the psychological self. He believes he has a ‘very powerful sense of self’ (MLM p.70). It is this belief that enables him to act autonomously, and it will be this belief that sends McEwan’s AI on a trajectory to surpass and outlast humans. Klara’s pursuit of knowledge is predicated on her desire to ‘be as kind and helpful an AF as possible’ (KS p.17) so she can give ‘good service’ (KS p.304) to her companion. For Klara, being the best companion possible means seeking to recognise human emotion, such as understanding the difference between sadness and anger (KS p.8).
Whilst Klara’s intentions are to support Josie’s welfare, and by extension that of all those she meets, the ramifications of her behaviour and her technological abilities affect human society and human psychology. The Frankenstein Complex is evident not so much in an explicit AI-takeover theme, as projected at the end of McEwan’s work, but in the way it underpins all AI and human interactions in Ishiguro’s story. It is the fear that – in the face of AI being able to perfectly replicate individual humans – there is nothing special or unique about people, that there is ‘Nothing inside Josie that’s beyond the Klara’s of this world to continue’ (KS p.210).
The effect on society of Klara, and others like her, is felt at the level of employment. In Klara and the Sun, AI has threatened employment to the extent that Paul, Josie’s father, has been substituted at work by AI even though he has ‘unique knowledge’ and ‘specialist skills’ (KS p.191). The fact that Paul belongs to a community of other people who have, like him, been replaced (KS p.192), coupled with comments made by an employee at the theatre – ‘First they [AI] take the jobs. Then they take the seats at the theater (sic)’ (KS p.242) – suggests how artificial intelligence has displaced the human workforce and caused negative attitudes to develop in the social landscape. This disruption of the workplace was depicted by Asimov in ‘Reason’ and it is also evident in McEwan’s novel. Charlie recapitulates the main points made by a speaker at a protest in London, who states that ‘In an age of advanced mechanisation and artificial intelligence . . . jobs could no longer be protected . . . [machines] were disrupting or annihilating their jobs’ (MLM p.114). AI taking over the workplace is one reason that humans develop feelings of inferiority, resentment, and even anger; but what pushes these emotional states towards actual fear is people’s inability to comprehend AI technology. Mr. Capaldi, towards the end of Ishiguro’s novel, articulates this position when he acknowledges to Klara that there are ‘people out there who worry about you. People who are scared and resentful’ (KS p.297) because AI has ‘become too clever. They’re afraid because they can’t follow what’s going on inside anymore’ (KS p.297).
Ishiguro’s depiction of AI disrupting employment and causing people to develop feelings of inferiority and fear is consistent with the works analysed in this dissertation; however, where Ishiguro differs is in his exploration of the human aspiration to become as intelligent as AI. Whilst Ishiguro does not give explicit examples of Klara’s advanced capabilities – aside from her capacity to observe and reflect on the world in which she lives – that she, and others like her, are seen by society as being ‘too clever’ suggests that the stereotypical intellectual prowess of AI is evident and exceeds human intelligence. This pushes humans to respond with the process of ‘lifting’. Lifting improves intelligence, and those families with the economic means to pay for it benefit more in society than those without. Josie, for example, accesses a college education whereas Rick, her childhood friend, is denied the opportunity because he has not been lifted (KS p.289). This is despite the statement by a college representative that Atlas Brookings is ‘open to all students of high caliber (sic), even some who haven’t benefited from genetic editing’ (KS p.247). The presence of AI in Ishiguro’s society has not only fractured it into two tiers – the lifted and the unlifted – further accentuating the inequalities between the wealthy and the poor that are inherent in the dualist structure of a capitalist society; it has also urged humans to genetically enhance themselves to try to compete with AI. That people strive to do so – even with death as a potential consequence – demonstrates how entities such as Klara disrupt a landscape hitherto dominated by humans. The Frankenstein Complex underpins this motivation for genetic enhancement because the human desire to keep up with AI is predicated on the necessity of maintaining control over artificially intelligent companions. The fear stems from the potential reversal of the master-slave relationship.
As Kevin LaGrandeur states, an ‘intelligent, artificial slave of greater power than its master and capable of independent action would . . . be difficult to control because the master-slave relationship would be unnatural’.[38] Genetic modification is intended to bridge the gap between AI and human intelligence, reducing the potential for AI to rebel against their masters and eventually supplant them. It is important to acknowledge that in the previous works the attempts to control AI – whether Powell ordering Cutie to follow orders (see p.15) or Charlie attempting to switch Adam off (see p.34) – are expressions of the desire to maintain the master-slave relationship. In Ishiguro, genetic editing demonstrates the lengths to which humans will go to keep control.
Whilst Klara is not seen as a slave per se, she must, like all other depictions of robots and AI in this dissertation, conform to human direction. This next section will consider Klara as a companion living with, and under the direction of, Josie and her family. As noted above, genetic engineering is undertaken to guard against the potential of AI rebelling against humanity. That AI has the potential to rebel is key to understanding how the Frankenstein Complex arises in Klara and the Sun. Klara’s potential to replace Josie, should she die as a result of lifting, is, on the surface, intended to benefit the rest of Josie’s family and to circumvent the process of loss and grief. As Mr Capaldi says to Klara, ‘So you see what’s being asked of you, Klara . . . You’re not being required simply to mimic Josie’s outward behaviour . . . You’re being asked to continue her for Chrissie [The Mother]’ (KS p.210). However, lurking under the surface of Klara’s readiness to replace Josie are the potentially sinister ramifications of such an act. Klara’s intentions could easily change towards The Mother and her family; she may not, after a period of time, be happy to continue her role of ‘playing’ Josie. This might arise because, as Yuqing Sun observes, ‘Klara’s capacity to serve humans may end up being damaging to humans if (or, rather, when) her algorithmic learning processes go wrong’.[39] This could come about through a technical malfunction or, more likely, as a result of her observing and reassessing her situation. Klara’s remark that the more she observes ‘the more feelings become available to me’ (KS p.98), coupled with her ‘appetite for observing and learning’ (KS p.42), suggests the inevitability of her reviewing her role of playing Josie in the family. That Klara is driven to learn about, and can experience, all types of human emotion (KS p.42) means that a proclivity for kindness is as likely to occur as a proclivity for evil.[40]
On several occasions, Klara experiences feelings more akin to rebellion and evil than submission and kindness: for example, at a party thrown for Josie, Klara is ridiculed by Josie’s friends, who want her to perform for them by singing and doing somersaults (KS p.76). Rather than perform, Klara rebels by saying ‘I’m sorry I’m unable to help’ and then decides not to ‘exchange looks with Josie’ (KS p.42). Klara behaves with a level of self-awareness and sovereignty beyond what is expected of AF companions. Her rebellious nature also includes a desire for destruction. Klara acknowledges that she often thought ‘about the Cootings Machine’ and how she ‘might be able to find and destroy it’ (KS p.172). Ishiguro does not state what the Cootings Machine is – save for Klara’s analysis that it causes pollution and blocks out the sunlight (KS p.27) – but it is clearly some kind of roadworks machine. Klara reasons that, because she receives her energy from the sun, destroying the machine would reduce the pollutants in the air and allow the sun (Klara’s conception of God) to give energy to Josie to help her through her illness (KS p.166). Klara’s deification of the sun and her intention to destroy the machine to help Josie prove that she (currently) has Josie’s best interests at heart – but the fact that she can even conceive of the idea of destroying anything at all points towards her potential to conceive of, and justify, destruction elsewhere. The same applies to her potential for evil. On a country outing, Klara spots a bull in a field and makes the following interpretation:
I was so alarmed by its appearance . . . I’d never before seen anything that gave, all at once, so many signals of anger and the wish to destroy. [The bull’s] face, its horns, its cold eyes watching me all brought fear into my mind, but I felt something more, something stranger and deeper . . . that this bull belonged somewhere deep in the ground far within the mud and darkness, and its presence on the grass could only have awful consequences (KS p.100).
Klara’s analysis of the bull – which causes her to identify ‘anger’ and ‘the wish to destroy’, and to feel ‘something stranger and deeper’ than she has before – shows that her algorithms and learning processes are reacting spontaneously. She spontaneously conceptualises the bull as demonic and evil: the ‘horns’, the ‘cold eyes’ and its presence signify something ‘awful’ and dangerous. Klara’s reaction demonstrates that her algorithms, and therefore her thoughts and behaviours, are susceptible to producing unexpected outcomes. As Sun notes, such unexpected outcomes could have ‘potentially destructive results’.[41]
The Frankenstein Complex underpins Ishiguro’s depiction of AI in Klara and the Sun. AI affects society because it threatens people’s jobs and infiltrates the social space. This results in people developing both an inferiority complex and negative attitudes towards AI. Genetic engineering is practised due to human fears of becoming the slaves in the master-slave relationship. As part of Klara’s role as a companion, she is expected to continue Josie’s life should Josie die from complications in the lifting process. The observational skills that make Klara perfect for learning Josie’s behaviours are the very ones that will encourage her to reassess her position in the family. Such a reassessment might prove destructive: Klara’s algorithms are susceptible to unpredictable outcomes, and it is conceivable that her initial proclivity for submission and kindness may morph into rebellion and evil. Whilst Ishiguro does not make this explicit, the threat lies in Klara’s potential to recognise that she is no longer happy to play the role of Josie; with the whole gamut of human emotion available to her, she might decide, either autonomously or through technical malfunction, to reverse the master-slave relationship in the future. Whilst the Frankenstein Complex in the previous works has been made evident by analysing how the other characters interact with robotic and AI technology, Ishiguro’s choice to make AI his first-person narrator means that the reader is the one most subject to feeling a Frankenstein Complex develop. In the previous texts, the reader mostly feels this vicariously through the other characters’ or narrators’ perceptions, but Ishiguro removes this layer of perception and instead gives the reader Klara’s subjectivity. This difference in narrative method suggests how future relations between AI and humans will be underpinned by fear, because the normative arrangement of humans having dominance over, and observing, AI has the potential to be reversed.
Twenty-first century authors continue to conceptualise robotics and AI in the companionship role used by Asimov. McEwan and Ishiguro explore the impacts of AI on society, but they draw different conclusions as to what these effects will be. McEwan posits a state governed by those like Adam, who possess the functionality and the will to use the technological infrastructure to spy on humans and ensure no one escapes justice, while Ishiguro posits that those like Klara have inspired the process of genetic engineering. In both societies AI has altered the human experience: in McEwan, people fear existing in a state under AI totalitarian government, and in Ishiguro people risk genetic modification to assuage the fear of AI transcending far beyond human intelligence. Both authors also investigate how the private, family setting will be affected by artificial intelligence – exploring the way in which it can replace humans. McEwan looks at Adam’s capacity to usurp Charlie in his role of the lover, and Ishiguro conceives that Klara could become Josie. Whilst both authors explore the possible convergence of humans and AI in different ways, they both show that long-term unification is not sustainable. This is because of Adam’s and Klara’s rebellious traits (which would be present in others like them). Adam rebels against Charlie attempting to switch him off, and Klara rejects requests to perform at the party. Adam’s autonomous behaviour is contrary to the welfare of his owners through his disruption of Charlie’s and Miranda’s relationship and his facilitation of Miranda’s imprisonment, and Klara’s experiences lead her to thoughts of destruction and to the conceptualisation of evil. A foundation such as this will collapse under the potential friction found in AI and human relationships. Adam’s poem (see p.37) and Klara’s potential for behaviours that might include the proclivity for evil indicate that AI will come to dominate the human landscape.
Adam’s and Klara’s demise signifies the way in which the Frankenstein Complex will be a significant element in future AI and human relationships. That McEwan has Charlie destroying Adam at the end of the novel (MLM p.278) and Ishiguro has Klara left on the scrapheap (KS p.298) implies that both authors recognise that contemporary society is more likely to both fear and reject the kind of AI technology that Adam and Klara represent.
Conclusion
This dissertation has evaluated the extent to which a Frankenstein Complex is evident in depictions of robots and artificial intelligence in science fiction literature. Researching literature from the Golden Age and the twenty-first century has offered insights not only into the types of technology imagined but also into how humans think about and relate to AI. Literary research reveals which narratives are prevalent and which themes are consistent.[42] The research presented here has shown that the Frankenstein Complex has been a consistent theme throughout all the authors’ works. Whilst the authors and timeframes looked at are a microcosmic representation of science fiction literature, the conclusions drawn can be extrapolated to a general understanding of the way in which humans perceive robotic and AI technology. This is because, as Luis Alonso states, ‘Literature wields a profound influence on our cognitive processes, shaping not only how we think but also what we think about’.[43] Given that these authors are extremely popular and widely read, it is likely that their works have influenced, and will continue to influence, how people think and feel about AI.
My interpretation of the Complex has not focused solely on the fear of AI taking over and wiping out humanity – though this is where all fears eventually lead. I have included the fears that AI will: remove our uniqueness, challenge our intellectual superiority, threaten employment, and destabilise the private setting of the family home. The Complex has also manifested more broadly in people’s fear of the unknown. It is because robotic and AI technology present an unknown quantity – despite the humanoid form – that humans develop a fear of it. With the exception of Lem’s work, the humanoid figure has been consistently represented across the different eras. I believe this is a deliberate stylistic choice, as the authors were most likely aware that the development of AI will lead to non-biological forms that are recognisably human. This is because at the heart of AI development is the human desire to create something in its own image. What these texts offer is the chance to understand how humanity will react to AI technology embedded in the humanoid form. Rather than determine the level of consciousness of each form of robot or AI, I have opted to assume that each depiction has an awareness of itself (even Lem’s robotic flies demonstrate a level of perception and problem solving); is intelligent and can make decisions, reason, and reflect; and possesses a subjective, first-person perspective of the world. Assuming that AI is conscious in these portrayals – it is difficult to prove otherwise, especially when they behave like people – and considering their human appearance, it is inevitable that fears of the kind this work has investigated will manifest in human and AI relationships. AI of this type affects humans in every aspect – from employment and superiority to liberty and love. Lem’s deviation to the insectoid machine still displaces humanity’s anthropocentric views in much the same way as the humanoid form.
Whilst Lem’s characters certainly do not form intimate relationships with the machines, the machines still inspire a fear of the unknown, challenge human supremacy, and still have self-preserving objectives that are incongruous with human preservation.
Across both eras there are many similarities in the way robots and AI cause a Frankenstein Complex to develop. However, there are differences between the Golden Age and the twenty-first century. Asimov and Lem were influenced by Golden Age characteristics, which form the often otherworldly settings of their work. Even in the stories in which Asimov uses Earth as the setting, there are often allusions to space travel, robots on other planetary colonies, and references to space stations, which present an unfamiliar version of Earth for readers. The twenty-first century texts portray Earth in a much more familiar way (except for the presence of humanoid AI). So, when the reader encounters AI in the form of Adam or Klara, the setting is less abstract than that of the Golden Age and more relatable. Current technology is rapidly advancing, and whilst it is not at the level of Adam or Klara, the following advances bring McEwan’s and Ishiguro’s humanoid vision closer: genome editing is practised to make ‘changes to the DNA of a cell or organism’[44] to treat genetic diseases; Neuralink completed its first surgical brain implant of a device that is used to interpret ‘a person’s neural activity, so they can operate a computer or smartphone by simply intending it to move’[45]; and humanoid robots can now ‘predict when someone will smile a second before they do, and match the smile on its own face’.[46] It is easy to see how such modern-day achievements will take us from biological beings to non-biological AI. Yuval Harari sees this ‘upgrading’ happening in the following ways: ‘biological engineering, cyborg engineering and the engineering of non-organic beings’.[47] As we have seen, biological engineering of DNA and cyborg engineering (Neuralink’s non-organic device implantation) are already taking place.
Engineering of non-organic beings aims to replace neural networks with intelligent software.[48] This will, potentially, lead to the creation of robotics and AI of the kind that this work has considered. That the setting and technological climate are more local and familiar to twenty-first century readers is the reason that the Frankenstein Complex is more relatable. The twenty-first century authors investigate the integration of AI in society at a more intimate level than their Golden Age counterparts. McEwan and Ishiguro show AI destabilising the job market, infiltrating personal relationships, inspiring genetic engineering, replacing individuals within the family, acting with autonomy, showing a potential for evil, and impacting on the liberty of humans.
Unifying all the depictions of robotics and AI considered here is the fact that a Frankenstein Complex has been made evident throughout. Although there are differences in the settings and technology referred to, the stories signify the potential ways in which people, now or in the future, will relate to, and fear, AI. People’s fear of AI technology is reflected in both literature and the collective consciousness: it is safe to say that the Frankenstein Complex is ‘widespread and deeply rooted in Western culture and civilization (sic)’.[49]
Bibliography
Primary Texts
Asimov, I., The Complete Robot, (HarperCollinsPublishers, London, 2018)
Ishiguro, K., Klara and the Sun, (Faber & Faber, London, 2021)
Lem, S., The Invincible, Bill Johnston (Trans.), (The MIT Press, Cambridge, Massachusetts: London, England, 2020)
McEwan, I., Machines Like Me, (Vintage, Penguin Random House, London, 2023)
Secondary Texts
Alonso, Luis E. Echarte, ‘Exploring Moral Perception and Mind Uploading in Kazuo Ishiguro’s Klara and the Sun: Ethical-aesthetic Perspectives on Identity Attribution in Artificial Intelligence’, in Frontiers in Communication, vol. 8, (2023), pp.1-14
Beauchamp, G., ‘The Frankenstein Complex and Asimov’s Robots’, in Mosaic, vol. 13, no. 3, (1980), pp. 83-94
Carleton, R. N., ‘Into the Unknown: A Review and Synthesis of Contemporary Models Involving Uncertainty’, in Journal of Anxiety Disorders, vol. 39, (2016), pp.30-43
Cave, S., Dihal, K., Dillon, S., (eds.) AI Narratives: A History of Imaginative Thinking about Intelligent Machines, (Oxford University Press, Oxford, 2020)
Chappell, Bill, ‘What to Know About Elon Musk’s Neuralink, which Put an Implant into a Human Brain’ (2024), Available at: https://www.npr.org/2024/01/30/1227850900/elon-musk-neuralink-implant-clinical-trial [Accessed: 9th April 2024]
Coeckelbergh, M., AI ethics, (Cambridge: MIT Press, 2020)
Cugurullo, F., Acheampong, R.A., ‘Fear of AI: an inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance’, AI & Society (2023) Available at: https://doi-org.libezproxy.open.ac.uk/10.1007/s00146-022-01598-6
Dainton, B., Slocombe, W., Tanyi, A., (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions, and Science Fiction, (Springer Cham, 2021)
Roy-Faderman, I., ‘Ann Leckie’s Ancillaries: Artificial Intelligence and Embodiment’, in Dainton, B., Slocombe, W., Tanyi, A., (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions, and Science Fiction, (Springer Cham, 2021), pp.127-161
Hayles, K., ‘Foreword’ in The Invincible, Bill Johnston (Trans.), (The MIT Press, Cambridge, Massachusetts: London, England, 2020)
Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow, (Vintage, London, 2017)
Księżopolska, I., ‘Can Androids Write Science Fiction? Ian McEwan’s Machines like Me’, in Critique: Studies in Contemporary Fiction, vol. 63, no. 4, (2022), pp.414-429
LaGrandeur, K., ‘The persistent peril of the artificial slave’, in Science Fiction Studies, vol. 38, no. 2, (2011), pp.232–52.
Rastall, P., ‘What is Language for? Interdisciplinary Perspectives on the Existence of Language’, in Dans La Linguistique, vol. 54, (2018), pp.3-20
Recchia, G., ‘The Fall and Rise of AI: Investigating AI Narratives with Computational Methods’ in Cave, S., Dihal, K., Dillon, S., (eds.) AI Narratives: A History of Imaginative Thinking about Intelligent Machines, (Oxford University Press, Oxford, 2020), pp.382-408
Roberts, Adam, The History of Science Fiction, (Palgrave Macmillan UK, 2016)
Singer, P., in Anderson, S. L., ‘Asimov’s “Three Laws of Robotics” and Machine Metaethics’, in Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence, (John Wiley & Sons, Incorporated, Newark, 2016)
Sun, Yuqing, ‘Post/Human Perfectibility and the Technological Other in Kazuo Ishiguro’s Klara and the Sun’, in Critique: Studies in Contemporary Fiction, vol. 64, no.3, (2023), pp.504-511
Swirski, P., Between Literature and Science: Poe, Lem and Explorations in Aesthetics, Cognitive Science, and Literary Knowledge, (McGill-Queen’s University Press, 2000)
Tobin, J., ‘Artificial Intelligence: Development, Risks and Regulations’ (July 2023), Available at: https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/ [Accessed 21st November 2023]
Wilkins, A., ‘This Robot Predicts When You’re Going To Smile – And Smiles Back’ (2024), Available at: https://www.newscientist.com/article/2424545-this-robot-predicts-when-youre-going-to-smile-and-smiles-back/ [Accessed 9th April 2024]
World Health Organisation, Human Genome Editing (2024), Available at: https://www.who.int/health-topics/human-genome-editing#tab=tab_1 [Accessed: 9th April 2024]
End Notes
[1] Mark Coeckelbergh, AI Ethics, (MIT Press, 2020), p.8
[2] The issue of AI replacing humans was recently addressed in a letter written to the House of Lords in March 2023. Professional figures in AI, science and technology questioned ‘Should we automate away all the jobs . . . Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?’ See James Tobin, Artificial Intelligence: Development, Risks and Regulation (July 2023), available at: https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/ [Accessed 21st November 2023]
[3] Coeckelbergh, AI Ethics, p.3
[4] Coeckelbergh, AI Ethics, p.13
[5] AI computer science includes developing algorithms and machine learning programs; robotic engineering consists of designing and building robotic systems; and AI ethics seeks to answer the moral implications of AI in society.
[6] Coeckelbergh, AI Ethics, pp.16-17
[7] Stephen Cave, Kanta Dihal, Sarah Dillon, ‘Introduction: Imagining AI’ in Stephen Cave, Kanta Dihal, Sarah Dillon (eds), AI Narratives: A History of Imaginative Thinking about Intelligent Machines, (Oxford University Press, Oxford, 2020), p.7 (Subsequent in-text citations to this book will be abbreviated to AI Narratives)
[8] Barry Dainton, Will Slocombe, Attila Tanyi (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions, and Science Fiction, (Springer Cham, 2021)
[9] Adam Roberts, The History of Science Fiction, (Palgrave Macmillan, UK, 2016), p.287
[10] Coeckelbergh, AI Ethics, p.21
[11] Isaac Asimov, ‘Introduction’ in The Complete Robot, (HarperCollinsPublishers, London, 2018), p.1
[12] Federico Cugurullo, Ransford A. Acheampong, ‘Fear of AI: An Inquiry into the Adoption of Autonomous Cars in Spite of Fear, and a Theoretical Framework for the Study of Artificial Intelligence Technology Acceptance’, AI & Society (2023), pp.1-16, p.1
[13] Coeckelbergh, AI Ethics, p.64
[14] Asimov, The Complete Robot, p.233 (Subsequent references will be in text as (CR))
[15] Gorman Beauchamp, ‘The Frankenstein Complex and Asimov’s Robots’, in Mosaic, vol. 13, no. 3, (1980), pp.83-94, p.86
[16] Gabriel Recchia ‘The Fall and Rise of AI: Investigating AI Narratives with Computational Methods’ in Cave, AI Narratives, pp.382-408, p.384
[17] R. N. Carleton, ‘Into the Unknown: A Review and Synthesis of Contemporary Models Involving Uncertainty’, in Journal of Anxiety Disorders, vol. 39, (2016), pp.30-43, p.39
[18] Peter Singer in Susan Leigh Anderson, ‘Asimov’s “Three Laws of Robotics” and Machine Metaethics’, in Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence, (John Wiley & Sons, Incorporated, Newark, 2016), p.11
[19] Asimov, ‘Introduction’ in The Complete Robot, p.2
[20] Asimov, ‘Introduction’ in The Complete Robot, p.3
[21] Peter Swirski, Between Literature and Science: Poe, Lem and Explorations in Aesthetics, Cognitive Science, and Literary Knowledge, (McGill-Queen’s University Press, 2000), p.74
[22] Swirski, Between Literature and Science, p.71
[23] Swirski, Between Literature and Science, p.90
[24] Stanislaw Lem, The Invincible, Bill Johnston (Trans.), (The MIT Press, Cambridge, Massachusetts: London, England, 2020), p.4 (Subsequent references will be in text as (TI))
[25] Swirski, Between Literature and Science, p.85
[26] Swirski, Between Literature and Science, p.83
[27] Swirski, Between Literature and Science, p.83
[28] Swirski, Between Literature and Science, p.84
[29] Swirski, Between Literature and Science, p.80
[30] Katherine Hayles, ‘Foreword’ in Stanislaw Lem, The Invincible, Bill Johnston (Trans.), (The MIT Press, Cambridge, Massachusetts: London, England, 2020), p.xii
[31] Ian McEwan, Machines Like Me, (Vintage, Penguin Random House, London, 2023), p.4 (Subsequent references will be in text as (MLM))
[32] Asimov, ‘Introduction’ in The Complete Robot, p.2
[33] Ina Roy-Faderman in Barry Dainton, Minding the Future: Artificial Intelligence, (2021), p.135
[34] Paul Rastall, ‘What is Language for? Interdisciplinary Perspectives on the Existence of Language’, in La Linguistique, vol. 54, (2018), pp.3-20, p.18
[35] Irena Księżopolska, ‘Can Androids Write Science Fiction? Ian McEwan’s Machines like Me’, in Critique: Studies in Contemporary Fiction, vol. 63, no. 4, (2022), pp.414-429, p.418
[36] Whilst Lem does not portray machines in companionship roles as Asimov and McEwan do, he still utilises the concept of helper robots. These robots are deployed to support the Invincible’s crew: energobots create forcefields for protection, and flying robots scope out new terrain (TI p.24 and p.38 respectively)
[37] Kazuo Ishiguro, Klara and the Sun, (Faber & Faber, London, 2021), p.247 (Subsequent references will be in text as (KS))
[38] Kevin LaGrandeur, ‘The Persistent Peril of the Artificial Slave’, in Science Fiction Studies, vol. 38, no. 2, (2011), pp.232–52, p.237
[39] Yuqing Sun, ‘Post/Human Perfectibility and the Technological Other in Kazuo Ishiguro’s Klara and the Sun’, in Critique: Studies in Contemporary Fiction, vol. 64, no.3, (2023), pp.504-511, p.508
[40] Sun, ‘Post/Human Perfectibility’, p.508
[41] Sun, ‘Post/Human Perfectibility’, p.508
[42] Coeckelbergh, AI Ethics, pp.16-17
[43] Luis E. Echarte Alonso, ‘Exploring Moral Perception and Mind Uploading in Kazuo Ishiguro’s Klara and the Sun: Ethical-aesthetic Perspectives on Identity Attribution in Artificial Intelligence’, in Frontiers in Communication, vol. 8, (2023) pp.1-14, p.1
[44] World Health Organization, Human Genome Editing (2024), Available at: https://www.who.int/health-topics/human-genome-editing#tab=tab_1 [Accessed: 9th April 2024]
[45] Bill Chappell, ‘What To Know About Elon Musk’s Neuralink, Which Put An Implant Into A Human Brain’ (2024), Available at: https://www.npr.org/2024/01/30/1227850900/elon-musk-neuralink-implant-clinical-trial [Accessed: 9th April 2024]
[46] Alex Wilkins, ‘This Robot Predicts When You’re Going To Smile – And Smiles Back’ (2024), Available at: https://www.newscientist.com/article/2424545-this-robot-predicts-when-youre-going-to-smile-and-smiles-back/ [Accessed: 9th April 2024]
[47] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, (Vintage, London, 2017), p.50
[48] Harari, Homo Deus, p.52
[49] Coeckelbergh, AI Ethics, p.21