Since publishing The Church and AI: Seven Guidelines for Navigating the Digital Frontiers last May, the proliferation of artificial intelligence in everyday life has only increased.
In all honesty, while the advances in technology have certainly been interesting, I’ve not found much that I felt was worth adding to the content already included in the book…
…Except for one thing.
Warning: This is a fairly “hot take.”
Humour me here:
I think we should make a habit of treating artificial intelligence like humans.
Don’t worry. I hear you, there, in the back. No, this actually isn’t sacrilege. No, believe it or not, this isn’t heresy.
If you’ll give me a few minutes of your time, I’ll explain.
Imagine walking into a coffee shop and watching someone—we’ll call her Karen—barking orders at the barista. No greeting, no “please,” no “thank you.” Just curt demands and a dismissive, slightly raised voice.
Now, like that coffee Karen has just snatched from that poor barista’s fingertips, imagine this attitude slowly filtering—percolating—into every facet of our culture.
I’m increasingly aware of how easily we can fall into similar speech patterns and abrasiveness when interacting with AI-powered virtual assistants. And though the tone and manner of our interactions with artificial intelligence might seem trivial, I’m convinced they may have profound implications for our relationships, our communities, and even our spiritual formation over the long haul if we’re not careful.
For Christians, whose very lives are shaped by the call to recognize and honour the imago Dei in every person, these interactions matter more than we assume.
The Bible is clear that our language forms hearts and shapes communities. For example, James warns us about the tongue’s capacity to bless and curse (Jas. 3:5-10), and Paul exhorts the early Church to use words that are wholesome and build up rather than tear down (Eph. 4:29). We could go on.
Yet, when addressing AI—entities without feelings or consciousness—we can often allow our linguistic guard to fall. I’ve found in myself and others a tendency to speak bluntly, dismissively, or impatiently because it’s a machine, so we assume it doesn’t matter. And when talking specifically about the machine, I suppose it doesn’t.
But every interaction influences us. On page 177 of The Church and AI, I wrote this:
Winston Churchill once commented, “We shape our buildings and afterwards they shape us.” It’s the same with technology. It’ll undoubtedly be the same with artificial intelligence. Perhaps we could refer to it as the “technological formativity of AI.” As we shape AI, it’ll shape us in return. It already has, whether we want to admit it or not.
As the percentage of our social interactions with large language models increases, so too will the extent to which they shape our human conversations. If we become accustomed to speaking harshly or commandingly to our digital tools, I fear it may become easier to adopt similar approaches to the people around us.
To put it another way, I’m concerned that our interactions with very human-like AI will change the way we talk to real, flesh-and-blood humans.
Let me share just three of the myriad possibilities:
Someone may develop a habit of speaking impatiently to AI-powered customer service bots, using curt commands, dismissive language, and constant interruptions. Over time, these patterns of speech—rooted in transactional, goal-orientated interactions—will very likely impact one’s interactions with family members, coworkers, or service staff.
A child watches as their parents interact rudely with an artificial intelligence language model such as ChatGPT, Claude, Grok, or Gemini. In their innocence, they internalize these speech patterns, assuming that this sort of behaviour is acceptable when addressing others—particularly those perceived as less important.
As we become accustomed to increasingly efficient, nuanced, and logical responses from an AI service, we become increasingly impatient with the sorts of human interactions that require patience, nuance, or time.
With these sorts of possibilities on the horizon, I’m proposing that we treat artificial intelligence like humans so that we don’t treat humans like artificial intelligence.
The Evidence
Look at some of the statistics:
A University of Michigan meta-analysis of 14,000 college students found that the biggest drop in empathy occurred after 2000, and by 2010, college kids were “about 40 percent lower in empathy than their counterparts of 20 or 30 years ago.”
In 2016, a survey conducted by the AP-NORC Center found that 74% of Americans felt that manners and behaviour had deteriorated over the past several decades. Another study—the ABA Survey of Civic Literacy—found that 85% of respondents believed that civility in 2023 was worse than it was in 2013.
In her book Alone Together, Sherry Turkle argues that technology encourages shallow or controlled connections: people turn to online interactions to avoid the risks of intimacy, resulting in less authentic community.
We’re on a slippery slope of declining manners and eroding social skills. Now, we mustn’t mistake correlation for causation, of course; there are plenty of factors that contribute to these trends. But few can deny that the internet and social media loom like spectres in this discussion. That they have an impact is unquestionable; the real question is the extent of their effect.
It is certainly not beyond the scope of reason to suggest that as we’ve become increasingly digitized, we’ve also become decreasingly socialized.
Furthermore, consider the online disinhibition effect.
The online disinhibition effect describes how people often express themselves more freely and with less restraint in online environments compared to face-to-face interactions. In part, scholars suggest, this is because the internet affords anonymity, allowing people to say things they might not in real life.
But here’s the problem with the online disinhibition effect in today’s world:
The proximity of artificial intelligence—particularly the evolving speech capabilities of tools like Google AI Studio—means that the lines between navigating the digital frontiers and touching grass in the “real world” are increasingly blurred.
So what do we do?
This is where that “hot take” comes to the fore.
Technology, including AI, is a product of human creativity—a gift of common grace that reflects God’s provision for human flourishing. However, like all gifts, it requires wise stewardship.
Ultimately, politeness towards AI serves as a subtle but powerful method of ensuring that we don’t allow technological formativity to shape our speech—and ultimately, our hearts—towards those around us. In a culture marked by division, transactional relationships, and unkindness, the discipline of speaking with gentleness—even in digital spaces—becomes countercultural and transformational. It’s a stake in the ground that says, “I’m committed to virtues that reflect the character of Christ at all times.”
Let me be clear here: politeness to AI systems is not about getting sucked into believing that AI has human qualities. There are plenty of dangers in anthropomorphizing digital entities. Rather, politeness toward AI is a small but significant practice in forming us as a people who honour the image of God in others. It reminds us daily that words matter, relationships matter, and the virtues we cultivate in private shape the communities we build in public.
I’m making a habit of treating AI like a human so that I don’t treat humans like AI.
Will you join me?