The Church and AI: Staying Informed Is More Crucial Than Ever (Part One)
In the uncharted waters of artificial intelligence, up-to-date information is critically important.
Writer’s note: If you’ve been around for a while, you might have noticed that recently, these articles have been getting progressively longer each week. As some of you will know, this is because I am in the process of writing a book, and these articles will form the bulk of the first draft. Two birds, one stone, and all that.
There are three statements I’d like to make before we dive in:
Because of the size of this article, I will post it in two parts, so for this week only, you will receive three emails from me before a break for Christmas (and then back to one or two at most).
After the seven guidelines are complete, we’ll return to standard-sized articles (on the whole). If you do not like this longer content, stick with me! It’ll be shorter soon!
I have incorporated some material from previous posts in this article. This won’t be a regular occurrence, but I wanted to include some of those details in what may turn out to be part of the book.
Thanks for joining me on this adventure. Let’s continue exploring the Church and AI together.
Imagine artificial intelligence as a vast, unknowable ocean, incomprehensibly deep and distant, and as capable of destroying life as cultivating it. At the intersection of Church and AI, you are Columbus, tasked with navigating, taming, exploring, and leading your people into uncharted territory.
In the previous articles, we’ve explored some of the tools and resources necessary to ready yourself to set sail on this grand adventure. A plan is starting to form, and so is the metaphor.
Your crew represents the importance of championing authentic, meaningful—and most importantly, human—relationships in a world that is growing more digitally connected but increasingly socially disconnected. Just as a captain relies on a loyal, well-coordinated and tight-knit crew to navigate the seas, church leaders and congregations will need to prioritize relationships above all else to manage the coming changes effectively.
Your ship is a symbol of the need for congregational resilience. In the same way that a ship must be strong, resilient, and well-maintained to withstand the unpredictable seas, congregations must be adequately prepared to weather the societal, cultural, and technological storms that are approaching.
Your sails are a picture of the need for adaptable structures within the church. Just as sails must stand firm in the face of powerful winds and waves, they must also be flexible enough to adjust course when necessary. Likewise, leaders should cultivate within their congregations a semi-Luddite attitude toward technology: willing to stand firm against those destructive winds of change that imperil the Great Commission but simultaneously adaptable to the shifting sands and storms of this world.
Your tools reflect the need to embrace positive technological developments. Columbus embraced the most cutting-edge technology available to him in his day, such as the use of a magnetic compass and an astrolabe for celestial navigation. These technologies helped sailors to explore new horizons with accuracy and precision. Similarly, when the Church embraces positive technological developments, leaders can discover new ways to venture into unexplored territories and effectively fulfil the Great Commission.
Today’s article explores the fifth guideline:
Guideline #5: Stay Informed in a Rapidly Changing Environment.
Let’s continue our seafaring analogy.
As captain, your role is to observe the seas and skies constantly and use the most up-to-date navigational charts available to you to make the best possible decisions. Just as a skilled mariner must understand and respond to the changing conditions of the sea, church leaders must stay informed about the changing world to guide their congregation wisely.
Staying informed about developments in AI is a critical component of our seven guiding principles. Depending on your personality type, it might feel either too trivial to bother with or too overwhelming to keep up with. This week we’ll fight both extremes by asking two fundamental questions:
(Part One) Why should we stay informed about AI?
(Part Two) How should we go about following this vast torrent of news?
Why Should We Stay Informed?
Perhaps a thousand different issues, topics, and concerns are floating around in your mind. Many of them require constant attention. Why should artificial intelligence be one of them?
The fact that you’re reading this tells me that you have at least some interest in the subject.
We won’t cover all of the reasons to stay informed—that’s a topic worthy of a whole book—but below are some broad considerations. We’ll start with the smaller, more immediate reasons and progress to the larger, seemingly more distant ones.
Regarding AI, the Church must be proactive, not reactive.
Perhaps the most critical reason for the Church to stay informed in this rapidly changing world is to make proactive rather than reactive decisions in light of artificial intelligence.
As we will see below, there is a risk that AI is a Pandora’s box that, once opened, will be practically impossible to shut. If church leaders can stay reasonably informed about the cultural, technological, societal, economic, existential, and ethical challenges looming on the horizon, they can ensure that they are adequately prepared to meet these changes head-on in a way that glorifies God and draws people to him.
The Book of Proverbs celebrates the virtues of diligence and foresight, or to put it another way, the need to be proactive rather than reactive. Here are just a few examples:
Proverbs 6:6-8 (CSB): “Go to the ant, you slacker! Observe its ways and become wise. Without leader, administrator, or ruler, it prepares its provisions in summer; it gathers its food during harvest.”
Just as ants are self-motivated, proactive, and prepare their provisions in periods of abundance, the church can be proactive in staying informed and preparing for the staggering changes that might be waiting for us in light of artificial intelligence.
Proverbs 21:5 (CSB): “The plans of the diligent lead to profit as surely as haste leads to poverty.”
This verse encourages a thoughtful, well-planned, and diligent approach to tasks and challenges. Similarly, by continually staying informed about the current state of AI, the Church can carefully plan its response and avoid reckless decisions. In so doing, leaders can proactively navigate the complexities of the technology in a way that honours the Lord.
Proverbs 22:3 (CSB): “A sensible person sees danger and takes cover; the inexperienced keep going and are punished.”
In this verse, the author highlights the foresight of a sensible person who identifies and avoids danger. In the context of AI, the Church would similarly do well to be aware of the potential challenges and ethical dilemmas that AI might present and take proactive steps to mitigate risk.
By engaging with AI early and keeping on top of developments, the church can offer well-reasoned, prepared responses that elevate God-honouring principles and biblical theology while advocating for ethical, morally sound use of these technological advances.
On the other hand, if the Church is reactive rather than proactive, it may find itself in a position where its leaders are continually scrambling to address the aftermath of technological developments, leading to a disconnect with the needs and realities of the community, especially the younger, more technologically literate generations.
The Church can recognize and address imminent cultural, economic, and social change.
More immediately pressing than existential crises are the imminent cultural, economic, and technological changes taking place in our society, and the Church would be wise to pay close attention to them. The use of increasingly advanced AI will undoubtedly affect many facets of culture and is therefore worthy of considerable focus.
In a previous article, we discussed how our digital presence is cannibalizing our physical and emotional ones and how AI is set to make it worse. Beyond that, there are myriad ways in which artificial intelligence could drastically reshape our culture and societal norms.
Education may be greatly supported by the provision of specialized AI tutors at negligible cost,1 allowing human educators to “focus less on the rote aspects of imparting knowledge and more on building emotional intelligence, creativity, character, values, and resilience in students.”2 However, these technologies could also be a recipe for distraction and disinformation. Such systems can “track an individual’s online reading habits, preferences, and likely state of knowledge, [and] tailor specific messages to maximize impact on that individual while minimizing the risk that the information will be disbelieved.”3
As well as education, such systems and approaches will continue to affect social media in general and, in turn, influence the political and cultural landscape. Without ethical oversight, these systems may drive citizens into increasingly siloed perspectives, expose them to torrents of disinformation or artificial realities, and put groups at risk from nefarious regimes co-opting information for their agendas. Regarding disinformation, some experts have predicted that as much as 90% of online content “may be synthetically generated by 2026,” ushering in an information apocalypse—where disinformation and deepfake technology make it increasingly difficult to separate fact from fiction.4
Practically every iteration of AI-based technology brings additional ethical questions. Discussions around copyright and intellectual property law, privacy protection, and digital security are at the forefront of these considerations. However, society must also grapple with the inherent bias found in artificial intelligence systems. While one may assume that AI systems are objective because they base their decisions on pure data optimization, it is vital to recognize that the mass of data informing those decisions is embedded with the preferences and prejudices of its authors. “If,” as John Lennox notes, “the ethical programmers are informed by relativistic or biased ethics, the same will be reflected in their products.”5
Furthermore, AI systems are commonly referred to as “black boxes.” In other words, given the way many artificial intelligence systems arrive at their conclusions—often drawing on incalculable volumes of data for a single decision—it is almost impossible to discern the processes behind the answers. This is hugely problematic in the financial sector or when looking to AI for critical health decisions.6
In addition to these issues, it is also essential to ask how one might control an AGI system and how to define whether or not developments in AI have reached a danger point in an industry where the economic incentive to continue is perhaps too strong for nations and corporations to proceed with caution. How humans grapple with these economic, cultural, and ethical realities will be some of the defining decisions of this century.
It is crucial that church leaders are at least loosely aware of these sorts of developments in order to preach, teach, and pastor accordingly. The very real potential of the changes listed above requires us to have strong theological foundations in place in a host of areas, as alluded to in a previous article: “What does the Bible say about artificial intelligence?”
We can be best prepared when we understand the gravity of the subject we are discussing.
Additionally, the Church must confront dangerous ideologies such as transhumanism.
Transhumanism is an insidious ideology that will increasingly warrant discussion. If the transhumanist movement continues along its more extreme trajectories, the Church will be critical in leading people in healthier directions. As such, staying current with these ideological visions is vital.
Let me explain.
In many ways, artificial intelligence can be enormously positive for physical and mental health. AI should “enable researchers to unravel and master the vast complexities of human biology and thereby gradually banish disease.” The healthcare industry can cut costs, reduce errors with increasingly automated surgeries, and perhaps even produce neural-implant or human augmentation technologies “that will replace and improve [one’s] auditory perception, image processing, and memory.” While early versions of this technology are generally positive, the transhumanists sit on the spectrum’s other, darker extreme.
Fundamentally, transhumanists seek to use technology to improve human capability. However, the idea goes much further. Sachin Rawat’s definition of the term is helpful, if a little troubling:
“Transhumanism is a philosophical movement that aims to free the human body and mind of their biological limitations, allowing humanity to transcend into a future unconstrained by death.”7
As an idea, transhumanism is not new. It is heavily influenced by the Enlightenment belief that we can achieve real progress through reason and science. Such an emphasis has, understandably, led to growing faith in the ability of technology to overcome the obstacles posed by the limitations of the human condition. The twentieth century also saw science fiction popularize transhumanist thinking. As we saw in A Brief History of AI, Aldous Huxley touched on the subject in Brave New World, and it found an audience in 2001: A Space Odyssey. I’m reliably informed by my friend Jorin that even Star Trek explores the idea.
However, as we’ll see, the transhumanist worldview has distant echoes of much older schools of thought.
But first, let’s be clear here.
In many ways, aspects of this sort of thinking have been phenomenally positive for the human race.
It’s likely that you know someone who has had a hip or knee replacement (or two), cochlear implants to counteract hearing loss, or any number of other technological enhancements to overcome the frustrations of biological hindrances. Maybe you’re even wearing glasses to read this article. Transhumanism is simply the extreme conclusion of the direction in which we’re already headed.
Meghan O’Gieblyn notes this in her book God, Human, Animal, Machine. She observes that transhumanists believe “we’ll have similar neural-implant technologies that will replace and improve our auditory perception, image processing, and memory.”
So far, so good.
However, take these worldviews to their logical conclusions and the tone changes.
What O’Gieblyn writes next is startling:
According to this thinking, consciousness can be transferred onto all sorts of different substrates: our new bodies might be supercomputers, robotic surrogates, or human clones. But the ultimate dream of mind-uploading is total physical transcendence—the mind as pure information, pure spirit. “We don’t always need real bodies,” Kurzweil writes in The Age of Spiritual Machines. He imagines that the posthuman subject could be entirely free and immaterial, able to enter and exit various virtual environments.
We’re starting to see where this is going.
If you’re not concerned yet, keep reading. What you’re about to read matters because, by some estimates, we might be there by the 2040s.8 This is where we get to the heart of the transhumanist discussion:
In his book You Are Not A Gadget, the computer scientist Jaron Lanier argues that just as the Christian belief in an immanent Rapture often conditions disciples to accept certain ongoing realities on earth—persuading them to tolerate wars, environmental destruction, and social inequality—so too has the promise of a coming Singularity served to justify a technological culture that privileges information over human beings. “If you want to make the transition from the old religion, where you hope God will give you an afterlife,” Lanier writes, “to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive.”9 (Emphasis added)
Wow.
There is something deeply troubling and inherently Gnostic about this perspective.
Gnostics—an ancient group declared heretical to the Christian faith—sought to transcend the limitations of the material world through special knowledge. In the same way, transhumanists seek to transcend the limitations of the human condition through special knowledge (and the resulting technology). Both Gnostics and transhumanists share the idea of “transcendence.” Indeed, the ultimate goal is to move beyond the material altogether.
The similarities are startling, and ultimately, as prominent Church Fathers like Justin Martyr and Irenaeus argued in the first centuries after Christ, such ideas are antithetical to the Christian faith.
It’s important to highlight that describing transhumanism as a genuine Gnostic heresy would be an overstatement. While there are echoes of the same reasoning, transhumanism is fundamentally secular, whereas Gnosticism is spiritual in nature.
At its extreme, however, transhumanism is a false Gospel. Here’s why:
The true Gospel of Jesus Christ tells us that we live in a fallen world damaged by the effects of sin and separated from a holy, loving God (Gen. 3; Rom. 3:23). The just punishment for our sin is death (Rom. 6:23).
Jesus Christ, both fully God and fully man, paid our debt on the cross and rose again, victorious over sin and death. Whoever believes in Jesus and trusts him can share in that victory (1 Cor. 15:57) and is no longer bound by death. Instead, they will experience eternity in a restored relationship with the Lord God Almighty in heaven (John 3:16).
The counterfeit gospel of transhumanism tells us that we live in a fallen world where material realities hinder us from experiencing life in all its fullness. Chief among these hindrances is death.
We are told that artificial intelligence and the Singularity will find a way for humans to live forever. Whoever believes in transhumanism and places their faith in a special knowledge related to superintelligent technology will transcend the bounds of this broken world and experience eternity in an endless conscious existence.
The parallels are striking, but so are the problems.
With transhumanism, mankind is trying to reach the heavens and make a name for itself, dangerously excluding the omnipotent, incomprehensibly holy God of the Universe.
It’s a 21st-century Tower of Babel.
As the Church, we must stay informed about developments in this expressly counterfeit gospel and be prepared to engage with a world that may lose interest in questioning the realities of life after death. From the perspective of an atheist, why should someone put their faith in Jesus for the possibility of eternal life when they could (in their minds) put their finances into a transhumanist company and guarantee it?
From this vantage point, it is easy to see why people like American political scientist Francis Fukuyama would describe transhumanism as “the world’s most dangerous idea.” As such, shepherds must stay informed for the safety of their flock. However, the dangers are larger than transhumanism alone.
The Church must recognize and address the potential existential challenges of AI.
A very real reason for the Church to stay informed regarding artificial intelligence is the potential existential danger that the technology might pose. I must stress here: I do not think this is the most important reason, but it is worth our attention nevertheless. Here’s why:
In June 2023, dozens of prominent AI scientists and other notable figures signed a succinct but powerful statement. It simply said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”10
Among the signatories were OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Kevin Scott, Microsoft's CTO. Importantly, it was not the first statement of its kind this year. In March, over 31,000 people signed an open letter citing the “profound risks to society and humanity” due to AI, expressing alarm at the lack of planning and management in light of such realities.11
Clearly, the rise of artificial intelligence has sparked heated debate among experts, with some warning of threats to humanity and others optimistic about its potential.
Perhaps this is propaganda. Possibly, it’s undue alarmism. However, it certainly appears that there is some legitimate reason to be concerned. Why are so many AI scientists and leaders sounding the alarm?
Artificial intelligence, in its current form, has shown remarkable progress in areas like machine learning and natural language processing in a short period of time. It already has the power to enhance our lives and revolutionize industries, but there is ever-growing fear around the potential for AI to surpass human intelligence and gain autonomous decision-making abilities, a concept generally referred to as “superintelligence.”
If you’ve recently opened up a large language model such as ChatGPT or Google Bard (at the time of writing), you’ll recognize that this still feels like a long way off. However, experts are concerned that the risks could be catastrophic should a superintelligent AI emerge.
But why? Here are some potential reasons:
Nick Bostrom, author of Superintelligence, describes a potential phenomenon called infrastructure profusion, “where an agent transforms large parts of the reachable universe into infrastructure in the service of some goal, with the side effect of preventing the realization of humanity’s axiological potential.”12 He continues by imagining the problem of a paperclip-producing AGI (artificial general intelligence):
An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.13
In other words, an AGI may decimate the planet in pursuit of its singular objective. While it’s unlikely that paperclips will be the cause of our demise, the overarching concept is far from implausible.
Another example: If you’re a superhero aficionado, you may remember that in the second Avengers movie, Ultron’s directive was to bring peace. In service of such a goal, Ultron decided that humans were the greatest threat and had to go. Could our world be destroyed in pursuit of a goal inadvertently misaligned with human interests?
The risks go further still. Take the growing interest in autonomous weaponry.
In 2017, the Future of Life Institute debuted a short film at the United Nations Diplomatic Conference in Geneva. It was called Slaughterbots. The video imagines a world where drones the size of small birds, equipped with AI, facial recognition, and enough explosives to kill a human target, are unleashed on the world.
Slaughterbots is six years old, and although it is a work of fiction, it was produced to communicate an important message: we are closer to this reality than many realize. Indeed, it was backed by Stuart Russell, author of Human Compatible and an expert with decades of experience in the field. According to Forbes,14 the U.S., China, the U.K., Russia, India, and Turkey are all working on this technology, but Israel is in the lead.15 Israeli forces used an AI drone swarm to locate and attack Hamas militants as far back as 2021 and have reportedly used the technology in combat since.
“Not only will these killer robots become more intelligent, more precise, more capable, faster, and cheaper,” writes Kai-Fu Lee, co-chair of the Artificial Intelligence Council at the World Economic Forum, “but they will also learn new capabilities such as how to form a swarm, with teamwork and redundancy, making their missions virtually unstoppable. A swarm of ten thousand drones that could wipe out half a city could theoretically cost as little as $10 million.”16
As costs plummet and once-proprietary software becomes increasingly open source, it is easy to imagine this sort of technology being abused by malicious operators or by a rogue superintelligent AI with nefarious purposes.
You might think, “Couldn’t we just flick the power switch if AI goes haywire?” Or possibly, “Why do AI experts feel that now is the time to sound the alarm?”
The problem? It’s an arms race on two fronts: humans versus humans and humans versus this hypothetical but looming superintelligence.
Here are a few snippets from Calum Chace’s book Surviving AI that explain the challenges well:
If there was a widespread conviction that superintelligence is a potential threat, could progress toward it be stopped? Could we impose “relinquishment” in time? Probably not, for three reasons.17
…First, it is not clear how to define “progress towards superintelligence”, and therefore we don't know exactly what we should be stopping. If and when the first AGI appears, it may well be a coalition of systems which have each been developed separately by different research programmes.18
…Secondly, to be confident of making relinquishment work we would have to ban all research on any kind of AI immediately, not just programmes that are explicitly targeting AGI. That would be an extreme over-reaction.19
…The third reason why relinquishment is hard is the most telling. The incentive to develop better and better AI is just too strong. Fortunes are already being made because one organisation has better AI than its competitors, and this will become ever more true as the standard of AI advances.20
Individuals, companies, and even nations risk being left behind if they opt out of artificial intelligence while others press ahead, which is a worrisome prospect. Whether the issue is superintelligence, automated weaponry, economic advantage, totalitarian regimes, or any number of other potential problems, the biggest challenge is that reducing the risks will take a united, concerted effort.
This is yet another reason why the collective body of Christ must stay vigilant in following the developments taking place in the field of artificial intelligence. However, while existential crises are possible and worthy of attention, they are far from probable at this point.
Clearly, there is good reason for church leaders to stay informed in a rapidly changing environment. But how can we do that?
In part two, we’ll address that very question.
1. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking, 2019), Kindle loc. 1827 of 7202.
2. Kai-Fu Lee and Chen Qiufan, AI 2041 (New York: Currency, 2021), Kindle loc. 2122 of 7291.
3. Russell, Human Compatible, Kindle loc. 1893 of 7202.
4. Europol Innovation Lab, Facing Reality? Law Enforcement and the Challenge of Deepfakes (Luxembourg: Publications Office of the European Union, 2022).
5. John C. Lennox, 2084: Artificial Intelligence and the Future of Humanity (Grand Rapids, MI: Zondervan, 2020), 149.
6. Thomas H. Davenport and Rajeev Ronanki, “Artificial Intelligence for the Real World,” in HBR’s 10 Must Reads On AI, Analytics, and the New Machine Age (Boston, MA: Harvard Business Review Press, 2019), Kindle loc. 223 of 3348.
7. Sachin Rawat, “Transhumanism: Savior of Humanity or False Prophecy?,” Big Think, July 27, 2022, accessed May 5, 2023, https://bigthink.com/the-future/transhumanism-savior-humanity-false-prophecy/.
8. Lee and Qiufan, AI 2041, Kindle loc. 7198 of 7291.
9. Meghan O’Gieblyn, God, Human, Animal, Machine (New York: Doubleday, 2021), Kindle loc. 1012 of 3853.
10. “Statement on AI Risk | CAIS,” accessed June 2, 2023, https://www.safe.ai/statement-on-ai-risk#sign.
11. “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, accessed June 2, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
12. Nick Bostrom, Superintelligence (Oxford: Oxford University Press, 2014), 150.
13. Ibid.
14. David Hambling, “Israel Rolls Out Legion-X Drone Swarm For The Urban Battlefield,” Forbes, last modified October 24, 2022, accessed June 2, 2023, https://www.forbes.com/sites/davidhambling/2022/10/24/israel-rolls-out-legion-x-drone-swarm-for-the-urban-battlefield/.
15. David Hambling, “Israel Used World’s First AI-Guided Combat Drone Swarm in Gaza Attacks,” New Scientist, last modified June 30, 2021, accessed June 2, 2023, https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/.
16. Lee and Qiufan, AI 2041, Kindle loc. 5182 of 7291.
17. Calum Chace, Surviving AI, 3rd ed. (Three Cs, 2020), Kindle loc. 3311 of 4658.
18. Ibid., Kindle loc. 3314 of 4658.
19. Ibid., Kindle loc. 3322 of 4658.
20. Ibid., Kindle loc. 3326 of 4658.