This is The Church and AI, a newsletter exploring both the practical and philosophical realities of AI and the Christian faith. Currently, I’m a few chapters into writing what is possibly my first book. Unfortunately, my current responsibilities mean that finishing it in the timeline I would have liked is just not possible.
My hope is that my schedule will lighten a little in 2024 in order to finish it, but until then, I felt it would be fun—and hopefully helpful—to share some of the content, as there is really no need (other than personal pride) to squirrel it away. Below is the first draft of the first chapter; your feedback would be very gratefully received.
CHAPTER 1: A BRIEF HISTORY OF AI
Almost all of us will approach the concept of artificial intelligence with considerable bias. Perhaps we conjure up images of nefarious, He-Who-Must-Not-Be-Named-like supervillains set on world domination and mass destruction. Or maybe we see AI as the digital white knight in shining armor, here to usher in an almost-but-not-quite heavenly utopian fantasy.
The irony is that binary thinking is not a helpful way to understand this most binary of technologies.
As with most well-written characters, artificial intelligence is far from two-dimensional. It transcends the boundaries of good and evil, of black and white. It’s a multidimensional reality that defies simple classification, however much we might wish for one.
Artificial intelligence and its future are complex, nuanced, and enigmatic, dependent on innumerable factors and considerations. Just as a backstory is vital to any character with depth, history makes all the difference in understanding AI.
For that reason, to understand the current state of AI—which will most likely be outdated when you read these words—it’s helpful to dig into AI’s history. You might be disappointed if you’re looking for a comprehensive, play-by-play analysis of this technology’s rise. Our purpose is to finish this chapter with a broad understanding of some key moments that will help us contextualize some of the enormous changes that have taken place in the last decade and move towards a healthy, considered response as the body of Christ.
Strangely, our journey towards understanding AI begins not in the laboratories of tech giants like Google and Meta, but in the mists of time, with a 2700-year-old Mediterranean myth. As we uncover the layers of AI’s history, we’ll discover how humanity’s obsession with intelligent machines has led us to this challenging juncture on the world’s timeline.
Dreams of the First Robots
As far as we can tell, it begins south of the Greek mainland, on the largest of the 6,000 islands and islets that make up the nation. This is the island of Crete, bordered by crisp white beaches and a sea that gleams the sort of shimmering turquoise that makes you wonder if the seabed is covered with those rare, precious stones that bear the same name. It’s many centuries before the birth of Jesus, and as the sun sets, the smell of freshly caught fish sizzling on the fire wafts enticingly across the dry, sun-scorched hills. Imagine a dull, rhythmic thump that echoes in the distance, subsides, and returns a few seconds later; too repetitive to be thunder but too loud to be anything of human origin. To the people of Crete, this is just a part of life, but to those would-be invaders watching from the horizon of the Mediterranean Sea, those distant rumbles bring with them unimaginable terror.
They are the hulking footsteps of Talos.
According to Greek mythology, Talos was a bronze automaton fashioned by Hephaestus, fused with a magical substance called ichor and made in man’s image. It was the size of New York’s iconic Statue of Liberty and bore the sole objective of protecting the island from encroaching invaders. The giant—it was said—lurched with considerable menace around Crete’s picturesque 1,046 km (650 mile) perimeter, ready at a moment’s notice to hurl chariot-sized boulders at any ship that dared come within his mighty range.
Of course, we know that Talos was nothing more than a myth, but there’s more to this story than meets the eye. According to Stanford scholar Adrienne Mayor in her book Gods and Robots, this is just one of many stories about “animated statues and self moving devices” found in Greek antiquity.1 She suggests that in Talos, humanity may have found one of the earliest conceptions of artificial intelligence. Little more than a spark—a whiff, a dream—but there nevertheless. Mayor explains:
The exact definition of the term robot is debateable, but the basic conditions are met by Talos: a self-moving android with a power source that provides energy, “programmed” to “sense” its surroundings and possessing a kind of “intelligence” or way of processing data to “decide” to interact with the environment to perform actions or tasks.2
Or what about Pandora, Eve’s echo? Where the latter was created by a loving and good God, the former is a mythical being, another creation of Hephaestus (who seems to enjoy creating not-quite-humans), shaped from clay and gifted with a forbidden, never-to-be-opened jar (the famous “box” of later retellings) that contained all the world’s evils. Perhaps, Mayor suggests, this Pandora of Greek mythology was also an early conception of artificial intelligence. She believes that Hesiod’s original description of Pandora reveals “an artificial, evil woman built by Hephaestus and sent to Earth on the orders of Zeus to punish humans for discovering fire” and whose primary objective was to “infiltrate the human world and release her jar of miseries.”3
In the murky mythological world of the Greeks, as Mayor readily admits, it is difficult to know how these ancient people viewed such self-moving, seemingly self-thinking entities. Was Talos the world’s first visualization of a robot? Was he sentient? Did he have a soul? Was he viewed as a feat of technology or a marvel of magic?
We don’t know.
But what we do know is this.
When Jesus began his ministry, stories of “android[s] with encoded instructions to carry out complex activities” would have been familiar in the society in which he and his disciples lived.4
In other words, Jesus would probably have known the tales of Talos and Pandora.
It’s worth noting, then, that Jesus could well have utilized Gentile mythology as a jumping-off point to prime his followers for the potentially existential realities of AI that lurked in a distant millennium.
He might have said something like, “Dear friends, you know Talos and Pandora from Greek mythology? One day, something like this will be a reality. . . here’s what you should do.”
But he didn’t. And really, that’s a conversation for much further into this book.
Here’s the point: some semblance of artificial intelligence as an idea has hovered around the shores of Crete and the cloudy world of the human imagination for centuries. However, the quest to actually achieve artificial intelligence ultimately began in the 1930s with a young British mathematician named Alan Turing.
Alan Turing
Turing was born in Maida Vale, England, on June 23, 1912, and his short life was defined by exceptional ability. He was not quite ten when his teachers were beginning to recognize the prodigious talent of the child, describing him as something of a genius.5 This early recognition of his considerable abilities would set the stage for a short-lived but remarkable career in the world of computer science.
At King’s College, Cambridge, Turing flourished. Indeed, his intellectual gifting was such that, after a lunch meeting with John Maynard Keynes, the prominent economist wrote to his wife, “I had to lunch today the Fellowship candidate who seems much the cleverest on paper. . . He is excellent – there cannot be a shadow of doubt. . . Turing is his name.”6 In 1936, at just twenty-four years old, he authored a groundbreaking paper titled On Computable Numbers, with an Application to the Entscheidungsproblem. Despite its somewhat unremarkable title, this paper revolutionized the world of computing by introducing the concept of universality in relation to computation.7
Universality (as defined by Stuart Russell, a leading computer scientist in the present-day field of AI) refers to the idea that a single machine could be designed to perform a multitude of tasks, such as arithmetic, machine translation, chess, speech understanding, and animation. In other words, “one machine [could do] it all.”8 Prior to Turing’s work, computers were often designed for specific tasks, and creating a new machine for each problem was cumbersome and impractical. Turing’s notion of universality laid the foundation for the development of general-purpose computers that could be programmed to handle diverse tasks efficiently.
The essence of Turing’s insight lay in his proposal of what would eventually come to be known as a “universal Turing machine.” He demonstrated that any computational task that could be described algorithmically could be executed by this hypothetical machine. In other words, he recognized that, in principle, intelligence could be artificially simulated. This idea was groundbreaking and changed the way people thought about computation, eventually paving the way for the personal computers and programming languages that define the digital world in which we live today.
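If you would like to see the idea in miniature, here is a toy sketch in Python (entirely my own illustration, and deliberately anachronistic: Turing worked with pencil-and-paper mathematics, not code, and his actual construction encodes the rule table on the tape itself rather than in a dictionary). The single simulator function below can run any “machine” handed to it as a table of rules, which is the essence of universality: one machine running many programs.

```python
# A minimal sketch of universality: one simulator, many "machines".
# The rule table maps (state, symbol) -> (symbol to write, move, next state).

def run_turing_machine(rules, tape, state="start", halt="halt"):
    tape = list(tape)
    head = 0
    while state != halt:
        symbol = tape[head] if head < len(tape) else "_"   # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rules for a machine that flips every bit, then halts at the first blank cell.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_rules, "10110"))  # -> "01001_" (flipped, plus a trailing blank)
```

Hand the same simulator a different rule table and it becomes a different machine; that shift in thinking is what Turing’s paper made possible.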
We can only imagine what Turing would have thought of today’s latest smartphones!
The contributions of this young mathematician’s gifted mind did not end there. After almost two years among the Collegiate Gothic architecture of Princeton University, Turing lent his gifts to a team of cryptanalysts at Bletchley Park, where he focused on cracking German codes during the Second World War. His work at Bletchley Park, along with the efforts of other codebreakers, had a significant impact on the outcome of the war and would eventually be depicted in the 2014 film The Imitation Game, which starred Benedict Cumberbatch and Keira Knightley, introducing his achievements to a broader audience.
After the war, Turing continued his exploration of artificial intelligence. Fourteen years after On Computable Numbers, with an Application to the Entscheidungsproblem, and fresh from his feats of codebreaking genius, Turing published another seminal paper, Computing Machinery and Intelligence. In this work, he conceptualized the imitation game and what would eventually become known as the Turing Test, an evaluation of a machine’s capacity to demonstrate intelligent behavior at a level comparable to—or indiscernible from—that of a human being.9
The Turing Test created a critical juncture in AI’s story because—in theory, at least—it provided a practical, objective method for measuring the intelligence of machines. Turing’s proposal captured the imagination of the public and ignited enthusiasm for the possibility of creating machines that could mimic human intelligence. Suddenly, the tales of Talos and Dorothy’s Tin Man in 1939’s The Wizard of Oz were no longer pure science fiction; there was a tangible sense that artificial intelligence might finally be within reach of scientific reality.
Turing’s visionary ideas and groundbreaking research laid the groundwork for the digital age and the ongoing pursuit of artificial intelligence. His legacy continues to inspire researchers and engineers in their quest to create ever-more sophisticated machines capable of learning, reasoning, and interacting with the world in ways that were once deemed solely human attributes. Today, Turing is rightfully celebrated as one of the pioneers of modern computing and artificial intelligence, and his contributions have shaped how we live, work, and interact with technology.
The 1950s and 1960s
Despite Alan Turing’s obvious intellectual prowess and monumental contributions to computer science, his life was marked by inner struggles, and he tragically took his own life shortly before his 42nd birthday, only a few years after publishing Computing Machinery and Intelligence. Nonetheless, his work and ideas sparked something of a surge of activity in the field of artificial intelligence during the remainder of the 1950s.
Turing’s work was followed in 1956 by the Dartmouth Summer Research Project, a pivotal moment in the history of artificial intelligence. In the hallowed halls of this esteemed New Hampshire Ivy League college, John McCarthy, an assistant professor of mathematics, along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, submitted a proposal for a project that sought to discover “how to make machines use language, form abstractions and concepts, solve [the] kinds of problems now reserved for humans, and improve themselves.”10 Building upon Turing’s ideas, the Dartmouth Summer Research Project acted as a catalyst, propelling artificial intelligence into the forefront of scientific exploration.
In 1957, just one year after the project at Dartmouth College, Sir Julian Huxley published the essay that introduced transhumanism as a named concept. Transhumanism emerged as a significant idea that demanded the attention of society and will increasingly require the attention of the Church. At its core, transhumanism seeks to leverage scientific and technological advancements to augment human capabilities and, in its most radical form, transcend the limitations of the human body by uploading consciousness into a digital medium. By doing this, proclaimed Huxley, “the human species will be on the threshold of a new kind of existence . . . consciously fulfilling its real destiny.”11
Interestingly, the concept of transhumanism wasn’t entirely new. Julian Huxley’s brother, Aldous, had explored similar themes in his 1932 dystopian classic Brave New World, offering early glimpses into a future where science and technology could profoundly influence human development.
As the 1960s unfolded, the scientific and philosophical communities began grappling with the notion of superintelligence. Among those engaged in this discussion was Irving John Good, a British mathematician, scientist, and former colleague of Alan Turing at Bletchley Park. In 1966, Good introduced the term ultraintelligence to the world—a precursor to the concept of superintelligence—and delved into the profound implications of machines potentially surpassing human cognitive abilities.
Good’s influential paper, Speculations Concerning the First Ultraintelligent Machine, predicted that the rise of ultraintelligence would “unquestionably” lead to an “intelligence explosion, [where] the intelligence of man would be left far behind.”12 This notion raised important questions about the potential consequences and ethical considerations surrounding the development and control of artificial superintelligence.
The 1950s and 1960s marked a pivotal period in the development of artificial intelligence, shaping the very foundations of the field as we know it today. In the fast-paced and sensationalized landscape of the 21st century, it is all too easy to succumb to doom-mongering, exaggerated claims, and conspiracy theories driven by the relentless algorithms of social media and the never-ending pursuit of ad revenue over objectivity. As we delve deeper into the potential future of AI, you may encounter concepts and predictions that seem far-fetched, tempting you to dismiss the potential of this technology as nothing more than a ridiculous fantasy.
I can certainly understand such skepticism; indeed, there is something refreshingly human about it. However, dear reader, if possible, I would encourage you to recognize that some of the finest minds of past generations foresaw the potential issues associated with artificial intelligence long before it was anything more than a figment of the imagination and all during a time when sensationalist media held considerably less sway over intellectual discourse.
When we explore the realms of artificial general intelligence (AGI), superintelligence, and transhumanism, we are not simply chasing after the latest passing trend. Rather, we are grappling with issues that have been at the forefront of intellectual discourse for many decades. These are anything but fads; they are enduring questions that have challenged humanity across generations. The difference is that now, what was once the realm of imagination is on the verge of becoming reality.
The AI Winters
Conceptual developments and ideas around artificial intelligence had advanced relatively quickly during the 1950s and 1960s, but progress slowed drastically in the 1970s, an era that witnessed the onset of what is now commonly referred to as the “first AI winter.” During this period, the field faced significant challenges that hindered its progress, including technological limitations and a lack of funding, leading to a general sense of inactivity and stagnation until around 1980.13
Pioneers of AI research had worked tirelessly to explore various approaches to developing systems that could match and possibly surpass human capabilities. However, in their zeal for success, researchers made many bold but unfulfilled predictions, and their support suffered for it. While these early attempts at creating AI systems showed potential, they were far from achieving what they had set out to accomplish. Primitive AI systems could not cope with ambiguity or uncertainty, producing underwhelming results that made them unsuitable for handling real-world problems effectively.
It turns out that unintelligent artificial intelligence is a tough sell.
Furthermore, the computational power available at the time was limited, severely hampering the development of sophisticated AI algorithms. As a result, governments and corporations became skeptical about supporting a field that seemed to be offering little in the way of practical progress, and funding dried up.
Evidently, the world was not quite ready for artificial intelligence, and the first AI winter set in.
For almost a decade, cold disinterest gripped the field like the frigid bite of a freeze on the Canadian Prairies. After what must have seemed like an age to those passionate about artificial intelligence, the early 1980s brought about crucial developments that rekindled interest in AI and set the stage for its eventual resurgence. One of the most significant factors in this thaw was the rise of personal computers (PCs). In one of the most important events in computer history, IBM released its Personal Computer in 1981, and two years later, Apple introduced the first mass-marketed computer with a graphical user interface (GUI). While the latter was prohibitively expensive for most consumers, it changed the computing landscape and opened the way for much cheaper alternatives.
The advent of affordable and accessible computers empowered a broader audience to experiment with programming and technology. This democratization of computing power, together with the arrival of new programming languages such as C++ in 1985, allowed researchers and enthusiasts to explore AI algorithms and build rudimentary AI applications on a smaller scale. Such changes brought computer programming into the mainstream, sparking interest in software development and technical skills among the general public. This newfound enthusiasm for technology paved the way for a new generation of AI researchers and enthusiasts who would later contribute to the field’s revitalization.
Japan’s bold investment in the Fifth-Generation Computer Systems Project was another pivotal factor in reviving interest in AI. Launched in the early 1980s, this ambitious endeavour aimed to surpass Western efforts in achieving artificial intelligence. Relations between Japan and the U.S. were tense and uneasy in light of changing international dynamics, accusations of espionage, and disputes over trade practices concerning technology. During this time, the Japanese government looked to take what they viewed as a “positive step in the creation of a new international relationship for Japan with the West, indeed with the whole world.”14 As such, they allocated a staggering $400 million over a decade to develop supercomputers capable of advanced computations, natural language processing, and solving complex problems. In other words, they sought to become world leaders in artificial intelligence.
The Fifth-Generation Computer Systems Project captured the attention of the international AI community and injected a renewed sense of competition and excitement into the field. Japan poured unprecedented resources into AI research, forcing other countries to try and keep up, igniting a race to push the boundaries of AI technology.
However, despite these encouraging developments, AI research continued to face significant challenges. The complexity of replicating human intelligence in machines, coupled with limited computational power and inadequate algorithms, hindered substantial progress in achieving genuine artificial intelligence. While there was some success with narrow AI systems which could excel at specific tasks, the ultimate goal of creating machines with human-like intelligence remained elusive. Nevertheless, although the Fifth-Generation Computer Systems Project did not achieve what it set out to do, it catapulted AI back into the forefront of discussions.
As the 1980s gave way to the 1990s and Japan’s ambitious project fell short of its goals, artificial intelligence again found itself in a period of decline, which has become known as the “second AI winter.” As in the first AI winter, technological limitations and failures became more apparent, and as a result, enthusiasm and funding declined. Public and corporate interest in AI dwindled, and the once-promising technology seemed to be losing its allure.
Despite the second AI winter’s obvious challenges, small pockets of progress and a few notable achievements kept AI’s flame burning. One of the most significant victories during this period was IBM’s “Deep Blue” project. Deep Blue was a supercomputer designed specifically to play chess at a grandmaster level. It was the culmination of years of research in parallel computing, machine learning, and chess-specific heuristics.
In the first of several historic moments for AI, Deep Blue faced off against the reigning world chess champion, Garry Kasparov, in 1996. In a stunning upset, Deep Blue defeated Kasparov in the first game, becoming the first computer program to win a game against a reigning world chess champion under classical time controls. While Kasparov eventually won the match, Deep Blue’s victory marked a pivotal moment for AI and demonstrated that machines could compete at the highest levels of human intellectual endeavour.
The following year, Kasparov once again faced IBM’s supercomputer, this time in a six-game rematch that would conclude on May 11, 1997. Photographers and camera crews gathered on the 35th floor of Manhattan’s Equitable Center, and palpable anticipation filled the room as the chess grandmaster sat down to face his greatest digital opponent. Kasparov won the first game, but Deep Blue took the second.
Game three: draw.
Game four: draw.
Game five: draw.
It was a tighter contest than anyone had expected. Kasparov went into the final game with the match on the line. IBM’s Deep Blue sacrificed its knight, obliterating Kasparov’s defence. The grandmaster rested his head in his hands before standing up and walking away from the table, raising his hands in exasperation.
In a historic turn of events, one of the greatest chess champions in history had lost a full match to a computer.
The success of Deep Blue garnered significant attention and ignited debates about the capabilities and limitations of AI. Some saw it as a groundbreaking achievement, while others argued that the victory merely demonstrated brute computational force and lacked true understanding or intelligence.
AI’s Big Bang
Despite the triumphs of Deep Blue and other notable achievements during the second AI winter, the field still faced significant challenges. The technology was not yet mature enough to fulfil the grand vision of creating intelligent machines that could understand and reason like humans across a wide range of tasks. AI researchers continued to grapple with the limitations of existing algorithms and the need for more sophisticated approaches to address complex real-world problems. However, in the early 2010s, everything changed.
In a dramatic turning point in AI’s history, the field experienced what many now refer to as “AI’s Big Bang,” driving a revolutionary breakthrough in neural networks and deep learning. At the forefront of this transformation was the pioneering work of Geoff Hinton, a renowned researcher in the field of machine learning and artificial neural networks.
Before we delve into Hinton’s remarkable contributions to AI, let’s take a moment to clarify some terms. Artificial intelligence is a catch-all term that encompasses many subcategories, like machine learning, deep learning, and neural networks.
Imagine artificial intelligence as a house with different rooms representing the various subcategories. In this (very loose) analogy, machine learning might be the kitchen. It’s a subset of AI where computers can learn and improve their performance on tasks without explicit programming. Instead, we provide the computer with examples, allowing it to deduce solutions on its own. This process is not dissimilar to the way in which a child learns from examples, enabling them to make generalizations and apply their knowledge in new situations.
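To make that concrete, here is a tiny illustrative sketch in Python (a made-up toy of my own, with invented numbers, not any real system). Rather than hand-coding a pricing rule, we give the computer a handful of examples and let it work the rule out for itself:

```python
# A minimal sketch of "learning from examples". We never tell the computer
# the pricing rule; we only show it examples and let it infer the pattern.
import numpy as np

house_sizes = np.array([50, 80, 100, 120])     # square metres (the examples)
house_prices = np.array([150, 240, 300, 360])  # price in thousands (the answers)

# Fit a straight line through the examples; the "rule" is learned, not written.
price_per_sqm, base_price = np.polyfit(house_sizes, house_prices, 1)

# Apply the learned rule to a house the computer has never seen before.
print(price_per_sqm * 90 + base_price)  # predicts roughly 270 (thousand)
```

The computer was never told that price scales with size; it generalized that from the examples, and that act of generalizing is the heart of machine learning.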
Deep learning is a subcategory of machine learning. If machine learning is the kitchen, deep learning might be the refrigerator. Deep learning leverages artificial neural networks, its fundamental building blocks, to process and understand complex patterns in data. These networks consist of interconnected nodes, or neurons, loosely mirroring the way our brains function. Information flows through these nodes, and as data passes through the layers, the network makes predictions or decisions based on the learned patterns. By employing this approach, deep learning can successfully tackle complicated tasks.
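And here, purely as an illustration (again, my own toy sketch rather than production code, with randomly chosen weights standing in for what training would learn), is roughly what that flow of information through the layers looks like:

```python
# A minimal sketch of data flowing through a tiny neural network: each layer
# multiplies its inputs by weights, adds a bias, and passes the result through
# a simple "activation" that decides how strongly each neuron fires.
import numpy as np

def relu(x):
    return np.maximum(0, x)          # a neuron "fires" only for positive signals

rng = np.random.default_rng(0)

# Two layers of made-up weights; training would adjust these numbers.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # 4 inputs -> 3 hidden neurons
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # 3 hidden neurons -> 1 output

def forward(inputs):
    hidden = relu(inputs @ w1 + b1)  # first layer of "neurons"
    return hidden @ w2 + b2          # output layer makes the prediction

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```

A real system has millions of these weights, and “training” is simply the process of nudging them until the network’s outputs match the examples it has been shown.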
Let’s get back to Geoff Hinton.
In 2012, Geoff Hinton and two of his students achieved a significant milestone in AI research when they won the ImageNet competition, a critical challenge in computer vision. They utilized deep learning techniques, particularly convolutional neural networks (CNNs), to dramatically improve image recognition performance. A CNN is a specific type of artificial neural network designed to process and understand visual information in much the same way the brain does, by detecting edges, shapes, and objects. This breakthrough profoundly impacted the field, propelling computer vision to new heights and transforming the way AI researchers approached various tasks.
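As a rough illustration (a toy of my own making, nothing like a real vision system), here is the core idea of the convolution that gives these networks their name: a small filter slides across an image and responds strongly wherever the pattern it encodes, in this case a vertical edge, appears.

```python
# A minimal sketch of convolution, the building block of a CNN. The "image"
# and the filter values are invented purely for illustration.
import numpy as np

image = np.array([            # a tiny "image": dark on the left, bright on the right
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

edge_filter = np.array([      # responds to a dark-to-bright vertical edge
    [-1, 1],
    [-1, 1],
], dtype=float)

def convolve(img, filt):
    fh, fw = filt.shape
    out = np.zeros((img.shape[0] - fh + 1, img.shape[1] - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Slide the filter over each patch and sum the element-wise products.
            out[i, j] = np.sum(img[i:i+fh, j:j+fw] * filt)
    return out

print(convolve(image, edge_filter))  # large values mark where the edge sits
```

A real CNN learns thousands of such filters automatically and stacks them in layers, so that edges combine into shapes and shapes into objects.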
The success of deep learning in computer vision sparked an explosion of interest and investment in AI research. Corporations and governments worldwide recognized the transformative potential of AI in various industries. Major technology companies, such as Google, Facebook, Microsoft, and Amazon, started investing heavily in AI research and development, creating dedicated AI research labs and acquiring AI startups to stay at the forefront of the competition.
The rapid progress of AI technologies also led to renewed discussions about its ethical implications, as the deployment of autonomous systems and AI-powered algorithms raised questions about privacy, bias, accountability, and potential economic and cultural impacts. Governments and regulatory bodies began grappling with the challenges of creating frameworks to govern AI technologies responsibly and ensure that AI is used for the betterment of society.
On November 30, 2022, OpenAI released an early version of “ChatGPT,” an AI-powered conversational agent designed to engage with users through something called “natural language processing.” ChatGPT quickly became a viral sensation, capturing the general public’s imagination and sparking widespread interest in AI.
ChatGPT’s ability to generate coherent and contextually relevant responses showcased the progress made in natural language understanding and human-machine interaction. People from all walks of life started using ChatGPT for various purposes, ranging from entertainment and companionship to educational assistance and professional support.
Humanity has never been closer to realizing those ancient Greek dreams of Talos. What happens next will inevitably transform the world as we know it, but precisely how that looks remains to be seen.
BIBLIOGRAPHY
Cawthorne, Nigel. Alan Turing: The Enigma Man. E-book. Arcturus Publishing Limited, 2014.
Chace, Calum. Surviving AI. Third Edition. Three Cs, 2020.
Good, I. J. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6 (1966): 31–88.
Huxley, Julian. “Transhumanism.” Journal of Humanistic Psychology 8, no. 1 (1957): 73–76.
Koizumi, Kenkichiro. “Technology at a Crossroads: The Fifth Generation Computer Project in Japan.” Historical Studies in the Physical and Biological Sciences 37, no. 2 (2007): 355–368.
Mayor, Adrienne. Gods and Robots. Princeton, NJ: Princeton University Press, 2018.
McCarthy, J., M. L. Minsky, N. Rochester, and C. E. Shannon. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955.” AI Magazine 27, no. 4 (2006).
Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York, NY: Penguin Books, 2019.
Stanford University. “Ancient Myths Reveal Early Fantasies about Artificial Life.” Stanford News, February 28, 2019. Accessed May 22, 2023. https://news.stanford.edu/2019/02/28/ancient-myths-reveal-early-fantasies-artificial-life/.
Turing, A. M. “Computing Machinery and Intelligence.” Mind, New Series 59, no. 236 (1950): 433–460.
———. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 2, no. 42 (1936): 230–265.
“Alan Mathison Turing (1912-54).” King’s College Cambridge. Accessed August 7, 2023. https://www.kings.cam.ac.uk/archive-centre/online-resources/online-exhibitions/alan-mathison-turing-1912-54.
NOTES
1. Adrienne Mayor, Gods and Robots (Princeton, NJ: Princeton University Press, 2018), 37.
2. Ibid., 41.
3. Stanford University, “Ancient Myths Reveal Early Fantasies about Artificial Life,” Stanford News, February 28, 2019, accessed May 22, 2023, https://news.stanford.edu/2019/02/28/ancient-myths-reveal-early-fantasies-artificial-life/.
4. Mayor, Gods and Robots, 42.
5. Nigel Cawthorne, Alan Turing: The Enigma Man, e-book (Arcturus Publishing Limited, 2014), 15.
6. “Alan Mathison Turing (1912-54),” King’s College Cambridge, accessed August 7, 2023, https://www.kings.cam.ac.uk/archive-centre/online-resources/online-exhibitions/alan-mathison-turing-1912-54.
7. A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society 2, no. 42 (1936): 230–265.
8. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York, NY: Penguin Books, 2019), Kindle loc. 686 of 7202.
9. A. M. Turing, “Computing Machinery and Intelligence,” Mind, New Series 59, no. 236 (1950): 433–460.
10. J. McCarthy et al., “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955,” AI Magazine 27, no. 4 (2006).
11. Julian Huxley, “Transhumanism,” Journal of Humanistic Psychology 8, no. 1 (1957): 73–76.
12. I. J. Good, “Speculations Concerning the First Ultraintelligent Machine,” Advances in Computers 6 (1966): 31–88.
13. Calum Chace, Surviving AI, 3rd ed. (Three Cs, 2020), Kindle loc. 433 of 4658.
14. Kenkichiro Koizumi, “Technology at a Crossroads: The Fifth Generation Computer Project in Japan,” Historical Studies in the Physical and Biological Sciences 37, no. 2 (2007): 355–368.