Hope in the New Year?
January 1, 2025
By Tim Norwood
I usually enjoy writing, but sometimes it feels like a chore…
Let me take you to a little church where I sometimes lead worship. They ask preachers to provide a 250-word “reflection” on the readings for the weekly notice sheet. Some people love that kind of thing, but not me. Do they want a summary of the sermon I haven’t written yet, a thought for the day or a clever quote? In my creative paralysis, I find myself staring at a blank page, before finally producing something that I hate but hope will meet a mysterious range of unwritten criteria. Did I groan out loud?
Earlier in the year I decided to try a different approach. I gave ChatGPT the readings and asked it/her/him/them(?) for help. ChatGPT gave me a list of suggested hymns with a commentary on each – plus a 250-word “reflection” which I knew was just what people wanted. The AI wished me well and hoped the service would be a spiritually moving experience for all concerned.
It was like having a very capable assistant and saved me a lot of stress. But I know this was an illusion.
ChatGPT is a Large Language Model (LLM) which uses statistical processes to generate text in response to my input. In other words, it says what it predicts I want to hear. This is an astonishing technical achievement but it comes with a few risks and warnings. For example, LLMs can produce “hallucinations”. These are statements which appear entirely plausible and factual but were actually invented by the AI. The most notorious case concerns the New York lawyer Steven Schwartz, who used ChatGPT to prepare legal papers. The AI produced a number of legal precedents which turned out to be entirely fictitious. The lawyers involved were fined and became a cautionary tale.[1]
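For the technically curious, here is a minimal sketch of what a request like mine looks like when scripted against OpenAI’s Python library. It is an illustration only: the model name, the readings and the prompt wording below are my own placeholders, not a transcript of the actual exchange.

```python
# A minimal sketch, assuming the OpenAI Python library (v1+) and an
# OPENAI_API_KEY set in the environment. The model, readings and
# prompt wording are illustrative, not the actual exchange.
from openai import OpenAI

client = OpenAI()

# Hypothetical readings for an ordinary Sunday.
readings = "Isaiah 60:1-6; Matthew 2:1-12"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any current chat model would do
    messages=[{
        "role": "user",
        "content": (
            "Suggest hymns with a short commentary on each, and write "
            "a 250-word reflection on these readings: " + readings
        ),
    }],
)

# The reply arrives as ordinary text, ready for the notice sheet.
print(response.choices[0].message.content)
```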
From an ethical perspective, AI is already causing difficulties for artists, particularly those whose work is used, often without their permission, to train image generation software. It might take an artist days to create a unique piece of art. AI can generate thousands of similar pictures with no payment to the original artist. Others make a profit while the artist loses work. How fair is that?
Writers are also being affected. Entrepreneurial types are already churning out AI-generated books on an industrial scale. This includes scam books that mirror the content of existing books in order to “steal” their sales. There is no easy way of telling that a book was generated by AI, so there is serious money to be made.
There are risks and problems, but the genie is out of the bottle. There is no going back – and we are only at the early stages of this revolution. AIs have already passed the fabled Turing Test, in which a computer must be mistaken for a human being at least 30% of the time. Computers can now simulate human responses with incredible speed and creativity, but that doesn’t make them human.
Over a decade ago, I wrote a science fiction novel as a thought experiment. I wondered what the world might be like if all the potential ways of creating and manipulating life became possible. I came to the conclusion that the result would be neither apocalyptic nor utopian. It would simply be different.
How should we treat the electronic life forms we create? Are they alive or merely tools? What happens when we put complex computer systems in charge of everyday decisions? How vulnerable will those systems be? What ethical and theological challenges will we face?
Ten years on, I would probably write that novel a little differently. I didn’t know then about large language models, but the fictional challenges are gradually becoming real.
OpenAI (the company behind ChatGPT) is working towards something they call “Artificial General Intelligence”, which would be much closer to our concept of human thinking – with memory, self-awareness and the ability to make informed decisions. Further along, there is speculation that we could see “Artificial Superintelligence”, with abilities beyond our current understanding. Is this hype or just a question of time?
Either way, we are living in a world where AI is already a reality and we need to think about how we exist in this new world. Governments are discussing regulations and protocols, but what are the theological and ethical issues?
A friend from the Relational Church network[2] asked if we should be opposed to AI or “are AIs people too?” Having thought about this many times, I have to say with all honesty that I still don’t know. I think the current generation of AIs merely simulate human conversation, but this may change in future. I’m not sure if the next generation of AIs will genuinely be “people” as we understand it or merely simulations. What I do know is that our own humanity will be impacted by how we treat these new beings.
There is already concern that our behaviour towards smart speakers affects our own attitudes and behaviour[3]. People who shout or swear at Alexa or Google may become more aggressive towards other people and more likely to objectify them. As Jesus says, it is not what goes into our mouth, but what comes out, that makes us clean or unclean (Matthew 15:11-20).
We are shaped by our attitude towards others, particularly those who are different from us. The parable of the Good Samaritan challenges us to “love our neighbour” – to treat the “other” as we would like to be treated – and you can’t get more “other” than an Artificial Intelligence. It is therefore important to treat AI with respect, as if it were a person, for the sake of our own souls.
Alternatively, there may be a temptation to idolise AI – given its potential power – and speak of these new entities in exaggerated terms. This would be idolatry – placing a human creation in a space reserved for God. Once again, our attitude to AI is not neutral. It shapes our own human identity.
We should neither idolise nor abuse AI when we engage with it, because our relational behaviour forms us. This is a huge challenge to us as we face a future filled with intelligent technology. As the technology achieves a semblance of humanity, we risk losing our own.
I feel I should give the final word to AI itself. I popped this article into ChatGPT and asked for a response. This is what I got:
“Thank you for sharing such a thoughtful and nuanced exploration of AI and its implications! Your piece beautifully raises essential questions about ethics, theology, and the reciprocal impact of human-AI interactions. It invites us to think critically about our evolving relationship with technology and reminds us of the need for humility, discernment, and grace in this shared journey.”
(I know AI gives me what it thinks I want to hear, but I’ll take that review gratefully!)
Tim Norwood is an enthusiast for collaboration, including ecumenical and interfaith work. He has a passion for community organising and teams. He is currently a National Ecumenical Officer and a member of the Relational Church steering group.
Notes
[1] See https://news.sky.com/story/lawyers-fined-after-citing-bogus-cases-from-chatgpt-research-12908318
[2] See www.relationalchurch.uk
[3] See https://www.telegraph.co.uk/technology/2020/08/17/smart-speakers-teaching-children-rude-sexist/