Global Leaders Urged to Address AI's Extinction Risk - How should the Church respond?
Why we should have confidence in the face of very real and imminent concerns.
In the last few days, dozens of prominent AI scientists and other notable figures signed a succinct but powerful statement:1
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Among the signatories are OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Microsoft CTO Kevin Scott. Importantly, this statement is not the first of its kind this year.
In March, over 31,000 people signed an open letter citing the “profound risks to society and humanity” posed by AI and expressing alarm at the lack of planning and management in light of such realities.2 It, too, was signed by equally prominent figures in the field.
Two important questions are worth asking as concern around the use of artificial intelligence increases:
Does humanity really face an existential risk from AI?
How should the global Church respond to such claims?
Does humanity really face an existential risk from AI?
Clearly, the rise of artificial intelligence has sparked heated debate among experts, with some warning of threats to humanity and others optimistic about its potential.
However, the balance is tipping toward the former. It seems that, yes, there is reason to be concerned.
So why are so many AI scientists and leaders sounding the alarm?
Artificial intelligence, in its current form, has made rapid progress in areas like machine learning and natural language processing. It already has the power to enhance our lives and revolutionize industries, but there is ever-growing fear around the potential for AI to surpass human intelligence and gain autonomous decision-making abilities, a concept generally referred to as “superintelligence.”
If you’ve opened up ChatGPT recently, you’ll recognize that this still feels like a long way off. However, experts are concerned that should a superintelligent AI emerge, the risks could be catastrophic.
But why? Here are some potential reasons:
Nick Bostrom, author of Superintelligence, describes a potential phenomenon called infrastructure profusion, “where an agent transforms large parts of the reachable universe into infrastructure in the service of the same goal, with the side effect of preventing the realization of humanity’s axiological potential.”3 He continues, imagining the now-familiar problem of a paperclip-producing AGI [artificial general intelligence]:
An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.4
In other words, an AGI may decimate the planet in pursuit of its singular objective. While it’s unlikely that paperclips will be the cause of our demise, the overarching concept is far from implausible.
Another example: If you’re a superhero aficionado, you may remember that in the second Avengers movie, Ultron’s directive was to bring peace. In service of such a goal, Ultron decided that humans were the greatest threat and had to go. Could our world be destroyed in pursuit of a goal inadvertently misaligned with human interests?
The risks go further still. Take the growing interest in autonomous weaponry.
In 2017, the Future of Life Institute debuted a short film at the United Nations Diplomatic Conference in Geneva. It was called Slaughterbots. The video imagines a world where drones the size of small birds, equipped with AI, facial recognition, and enough explosives to kill a human target, are unleashed on the world. See the video below (viewer discretion is advised).
Slaughterbots is more than five years old, and although it is a work of fiction, it was produced to communicate an important message: we are closer to this reality than many realize. Indeed, it was sponsored by Stuart Russell, author of Human Compatible and an expert with decades of experience in the field. According to Forbes,5 the U.S., China, the U.K., Russia, India, and Turkey are all working on this technology, but Israel is in the lead.6 Israel used an AI drone swarm to locate and attack Hamas militants as far back as 2021 and has reportedly used the technology in combat since.
“Not only will these killer robots become more intelligent, more precise, more capable, faster, and cheaper,” writes Kai-Fu Lee, co-chair of the Artificial Intelligence Council at the World Economic Forum, “but they will also learn new capabilities such as how to form a swarm, with teamwork and redundancy, making their missions virtually unstoppable. A swarm of ten thousand drones that could wipe out half a city could theoretically cost as little as $10 million.”7
As costs plummet (Lee’s ten-thousand-drone swarm works out to roughly $1,000 per drone) and proprietary software becomes increasingly open-sourced, it is easy to imagine this sort of technology being abused, whether by malicious operators or by a rogue superintelligent AI with nefarious purposes.
These are just two reasons why experts are concerned. Space will not allow us to discuss the potentially disastrous economic, societal, and codependency issues that may cause the world in which we live to crumble (we’ll table those discussions for another day)…but it would be foolish to ignore such challenges.
The problem: it’s an arms race
You might be thinking, “Couldn’t we just flip the power switch if AI goes haywire?” Or possibly, “Why do AI experts feel that now is the time to sound the alarm?”
Put simply, we are in an arms race, both with humans and with this hypothetical but looming superintelligence.
Here are a few snippets from Calum Chace’s book Surviving AI that explain the challenges well:
If there was a widespread conviction that superintelligence is a potential threat, could progress toward it be stopped? Could we impose “relinquishment” in time? Probably not, for three reasons.8
…First, it is not clear how to define “progress towards superintelligence”, and therefore we don't know exactly what we should be stopping. If and when the first AGI appears, it may well be a coalition of systems which have each been developed separately by different research programmes.9
…Secondly, to be confident of making relinquishment work we would have to ban all research on any kind of AI immediately, not just programmes that are explicitly targeting AGI. That would be an extreme over-reaction.10
…The third reason why relinquishment is hard is the most telling. The incentive to develop better and better AI is just too strong. Fortunes are already being made because one organisation has better AI than its competitors, and this will become ever more true as the standard of AI advances.11
Individuals, companies, and even nations run the risk of being left behind if they choose not to pursue artificial intelligence while others do, a worrisome prospect. Whether the issue is superintelligence, automated weaponry, economic advantage, totalitarian regimes, or any number of other potential dangers, the biggest challenge is that reducing the risks will take a united, concerted effort.
And “united” is not a word easily assigned to our world right now.
Which leads us to the question: if, as it is claimed, we are standing on the precipice of potential existential crises, how are we to respond as the Church?
The Church’s Response
For Christians, there are several important factors to bear in mind as we navigate the more foreboding aspects of an AI future. For a more detailed look at what the Bible says about artificial intelligence, consider reading last week’s article:
In summary: the article reminds us that we are made in God’s image; that we are called to good stewardship and service of the poor; and that we must maintain a strong theology of work, avoid idolatry and pride, and find wisdom in God alone. However, given the threat of a supposed existential crisis, there are a few other truths we must remember:
We serve a sovereign King
God’s incommunicable attributes—such as His infinitude, His self-existence, His immutability, His omniscience, omnipresence, omnipotence, and utter holiness—give Him the absolute ability to act in all circumstances. His sovereignty, meanwhile, is His absolute right to do so.
In great detail, Romans 9 reminds us that there is only one throne over all, and it belongs to the Lord of all Creation. Romans 8:28 tells us that God works all things together for the good of those who love Him, who have been called according to His purpose. Proverbs 19:21 reminds us that “Many plans are in a person’s heart, but the Lord’s decree will prevail.” We know that God works everything out in agreement with the purpose of His will (Eph. 1:11).
If it is true that we serve a sovereign King—and the Bible says it is—there can be no truly existential threat that would thwart God’s will. There will be no extinction risk.
We know that Jesus will one day return (Acts 1:11; 1 Thess. 4:16-17; Rev. 1:7), and He will return to a humanity still in existence. With every potentially catastrophic issue (such as nuclear threats and climate change), we must keep this important and comforting truth in mind.
Jesus will not return with great fanfare to an empty, barren wasteland and suddenly realize that humans annihilated themselves before He got here.
But while this should bring us confidence regarding the future, there is no room for complacency. The world will not be destroyed, but that doesn’t mean it is immune from devastating and irreparable damage. Thus we remember our call to stewardship, wisdom, and faithfulness.
Importantly, while there is room for concern, there is no room for anxiety.
We must not give ourselves over to sinful anxiety
In her excellent book None Like Him, Jen Wilkin describes the issue of future-focused anxiety and provides helpful encouragement:
We feed anxiety when we live in dread of the future . . . Our prayers become marked with requests to know the future rather than requests to live today as unto the Lord. Jesus reminds us not to be anxious for the future, “for tomorrow will be anxious for itself. Sufficient for the day is its own trouble” (Matt. 6:34). The antidote to anxiety is to remember and confess that we can trust the future to God. This does not mean that we make no preparation for the future, but that we prepare in ways that are wise rather than in ways that are fearful.12
We do not need to worry about the future (Phil. 4:6) because we trust our heavenly Father (1 Peter 5:7), knowing that He is a refuge and stronghold for every generation who looks to Him (Psa. 46:1; 90:2; Phil. 4:19).
We exercise an appropriate level of awareness and caution, but we do so with the ultimate confidence that our trust is in the Lord alone, and His plans will always prevail.
Our heavenly home awaits us
One of the great joys of the grace-filled Gospel is that, as followers of Jesus, our true home is with our Father in heaven (Phil. 3:20; Heb. 13:14-16; Rev. 21:3-4). Jesus commands us not to store up for ourselves treasures on earth, where moths and rust destroy and where thieves break in and steal, but to store up for ourselves treasures in heaven. Why? Because where our treasure is, there our heart will also be (Matt. 6:19-21).
As believers, we celebrate the fact that Jesus has rescued us from this present evil age (Gal. 1:4), but there is still work to do. Whether or not there are looming existential threats, our call to preach the good news of Jesus and play our God-ordained roles in this great salvation story remains the same (Matt. 28:18-20).
Whatever happens with AI and its supposed existential threats, we can (and indeed should) have total confidence in our Great King, the one true and unchanging constant in existence.
What do you think? Would you add anything? Do you have any questions? Are there any other issues we should be discussing? Let me know in the comments section below. We’d love to hear your thoughts.
If you’ve found this article helpful, please consider sharing it with friends and pastors to help prepare believers for the future of AI in the Church!
NOTES
“Statement on AI Risk | CAIS,” accessed June 2, 2023, https://www.safe.ai/statement-on-ai-risk#sign.
“Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, accessed June 2, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
Nick Bostrom, Superintelligence (Oxford: Oxford University Press, 2014), 150.
Ibid.
David Hambling, “Israel Rolls Out Legion-X Drone Swarm For The Urban Battlefield,” Forbes, last modified October 24, 2022, accessed June 2, 2023, https://www.forbes.com/sites/davidhambling/2022/10/24/israel-rolls-out-legion-x-drone-swarm-for-the-urban-battlefield/.
David Hambling, “Israel Used World’s First AI-Guided Combat Drone Swarm in Gaza Attacks,” New Scientist, last modified June 30, 2021, accessed June 2, 2023, https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/.
Kai-Fu Lee and Chen Qiufan, AI 2041 (New York: Currency, 2021), Kindle loc. 5182 of 7291.
Calum Chace, Surviving AI, 3rd ed. (Three Cs, 2020), Kindle loc. 3311 of 4658.
Ibid., Kindle loc. 3314 of 4658.
Ibid., Kindle loc. 3322 of 4658.
Ibid., Kindle loc. 3326 of 4658.
Jen Wilkin, None Like Him (Wheaton, IL: Crossway, 2016), 75-76.