4 Comments
Aidan Twa

Excellent study, Dave! I have major concerns about the implications of AI, mainly over the fact that it (in its "person") is at best morally neutral and must be taught how to behave ethically. As AI is ultimately a product, I think it's reasonable to assume its ethics will be determined according to investors' concerns, and that there will be success-or-failure-sized incentives to tailor it to the spirit of the cultures it is sold to.

I think this becomes dangerous when AI becomes a universal part of life and we all have a Zeitgeist representative in our pockets, both advising us and monitoring us.

Wayne & Lois Bos

We appreciate your passion to engage the issues of our day.

Watching at the Gate

The real test of artificial intelligence will not finally be technical. It will be moral.

Technology moves quickly. Character forms slowly.

Which means the future of AI may depend less on the brilliance of engineers than on whether human beings remain wise enough, humble enough, and spiritually grounded enough to govern what they have built.

And that is why conversations about AI need to happen not only in labs and boardrooms, but in quiet rooms under churches, where people still remember that intelligence is not the same as wisdom, power is not the same as truth, and instant answers are not the same as communion with God.

https://watchingatthegate.substack.com/p/the-dangers-of-ai?r=7cpm7i

Jed

I'm really looking forward to seeing your doctoral work in this area, Dave. It'll be needed for sure.