
Anglicans wrestle with potential uses, pitfalls of artificial intelligence

Published by
Sean Frankling

All across the bustling show floor at Collision 2024, startup companies display laptops, flyers and signs promising to apply artificial intelligence (AI) to security, data analytics, online shopping, code optimization—on and on it goes. Between quirky intro and outro music, panels of experts at the North American tech expo give prognostications ranging from the optimistic (AI will drastically increase productivity) to the very optimistic (AI will boost longevity to the point of making everyone immortal) to the catastrophic (AI will interfere with democracy or become a threat to human life).

Vinod Khosla, a venture capitalist and major investor in OpenAI, speaks at the Collision trade conference, held in Toronto June 17-20. Photo: Sean Frankling

Vinod Khosla, a venture capitalist and heavy investor in leading AI research and development firm OpenAI, tells an audience he expects computer learning tools to take over from human experts entirely within a decade.

“Whether you’re talking about a primary care doctor, a mental health therapist, a structural engineer, an oncologist, a salesperson, a chip designer—every one of these expertises is an opportunity for some startup here,” he says.

Meanwhile, skeptics, including scholars Timnit Gebru and Emily Bender, have argued that both utopian and nightmare AI predictions loom larger than they should, because AI industry leaders have exaggerated the technology’s abilities and sidestepped its limitations.

As secular techno-optimists and doomsday prophets argue with skeptics about the promise and peril of AI, the tools AI companies have turned out so far are already a source of controversy among Anglicans in Canada. Some in the church are eager to embrace the use of auto-generated text, for example, as part of daily ministry work, while others raise serious misgivings about the meaning to be found in text generated by a merely probabilistic process.

The possibilities for using a tool that can quickly search documents to summarize or generate text are numerous in a parish setting, says the Rev. Tay Moss, a priest at St. John’s Norway and director of online church learning platform CHURCHx. Moss uses customized tools based on ChatGPT, a popular chatbot from OpenAI, for everything from research and looking up daily liturgy and readings to drafting sermons and generating Bible study questions. He likens the process to having a conversation with a theology student: they might not get everything right, but using the chatbot to ask questions and prompt responses is a way to spark inspiration and work out ideas.

Moss walked the Journal through how he uses ChatGPT, looking up information on mustard seeds for a potential sermon on their significance in parables, asking the system to generate a prayer for mustard-seed farmers and demonstrating how it could reword a sermon to be easily understood by children.

“I wouldn’t necessarily use this as the final product, but … this is a very helpful tool to be able to very quickly generate ideas one could riff on,” he says.
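The kind of workflow Moss describes can also be scripted directly against a chat-completion API. The sketch below is a hypothetical illustration only, not a detail of his actual tools: the model choice, prompts and sermon text are all assumptions, written against OpenAI’s Python SDK.

```python
# Hypothetical sketch of the workflow described above, using OpenAI's
# Python SDK. Model, prompts and sermon text are assumptions, not
# details of Moss's actual setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

sermon_draft = (
    "The mustard seed, smallest of all seeds, grows into a tree "
    "large enough to shelter the birds of the air..."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {
            "role": "system",
            "content": "You are a theology student helping a priest draft materials.",
        },
        {
            "role": "user",
            "content": "Reword this sermon so a child could easily follow it:\n"
            + sermon_draft,
        },
    ],
)

# A draft to riff on, not a final product.
print(response.choices[0].message.content)
```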

Moss has also worked with The Episcopal Church, the Anglican Church of Canada’s counterpart in the United States, on an AI chatbot called AskCathy.ai, which, trained on more than 1,000 pages of documents on church policy and theology, allows users to ask questions about matters of faith. This application is perfect for people who may be curious about church life but either do not have access to qualified clergy or are nervous about approaching them, he says.
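The article does not describe how Cathy draws on those documents internally, but a common pattern for grounding a chatbot in a fixed document set is retrieval: find the passages most relevant to a question and hand them to the language model alongside it. The following is a minimal sketch of that retrieval step only, with invented stand-in passages, and should not be read as Cathy’s actual design.

```python
# Hypothetical sketch of document retrieval for a grounded chatbot.
# The passages are invented stand-ins; AskCathy.ai's internals are
# not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doc_chunks = [
    "The Episcopal Church welcomes transgender people into full participation.",
    "Baptism is full initiation by water and the Spirit into Christ's Body.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(doc_chunks)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k document chunks most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [doc_chunks[i] for i in top]

question = "What does the church teach about baptism?"
context = "\n".join(retrieve(question))
# These retrieved passages would then be prepended to the user's
# question in the prompt sent to the language model, keeping answers
# grounded in the source documents.
print(context)
```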

Between its June 2024 launch and mid-September, Moss tells the Journal, “Cathy” had processed 3,147 conversations averaging about 10 messages each. It had fielded pastoral questions, had exchanges with users who approached it with hostile language and debated difficult questions about theology, he says.

Testing AskCathy.ai out, the Anglican Journal found it was able to provide general answers about topics including adultery and evil spirits, and more specific answers about The Episcopal Church’s policies on the inclusion of transgender people. When asked why anyone should believe the Bible is true, however, it replied that some people choose to believe the Bible based on faith or the fact that others have believed it for a long time, and added that ultimately it’s a personal choice everyone has to make for themselves.

In a training course produced in association with the World Association for Christian Communication, Erin Green, a theologian and communications lecturer at Thomas More University in Belgium, argues churches and other nonprofits should get to work now on guidelines for responsible use of AI in their communications and other work as the technology grows more prominent. “There’s huge issues with privacy, ethics and bias, but this is the ecosystem in which we already live. This is already a fact of life,” Green says in the online course.

Organizations must decide for themselves what their priorities are when drawing up a policy, she says, but there is a wealth of resources available to show how others have approached the question so far. As one example, she holds up the treaty-based approach developed by Māori tech ethics researcher and doctor of Indigenous studies Karaitiana Taiuru, whose 2020 document on tech ethics emphasizes Māori sovereignty over their data and their right to control its “creation, collection, access, analysis, interpretation, management, security, dissemination, use and reuse.”

As a potential resource for organizations putting together their own AI policy, Green points in the course to a rich list of policies from around the globe on the website of the Organisation for Economic Co-operation and Development at OECD.ai, including a list of values-based principles and recommendations for policy makers. She encourages organizations especially to consider seeking out underrepresented perspectives.

At the end of the course, Green presents a series of ten beatitudes for AI that she asked the AI tool Ryter to generate.

“Blessed are the algorithms that create beauty and inspire creativity,” it begins. “Blessed are the models that bring forth understanding and enlightenment.”

Green’s course also highlights some potential uses of AI for generating communications materials. She describes how she used AI tools to help with graphic design and to create prompts for a full year’s worth of LinkedIn posts for a friend’s wool goods business, demonstrating how to use artists’ names to “flavour” generated images with their style.

She also warns users to beware a variety of pitfalls in AI results, ranging from biased training data and developers’ influence on depictions of vulnerable cultural groups to the tools’ potential for spreading misinformation. However, while the course demonstrates what such results might look like, it provides few strategies for correction beyond simply watching for problematic output and trying to prompt around it.

Green also mentions one of the several active and prospective class action lawsuits in the U.S. and Canada that centre on AI. In these lawsuits, writers, artists and private citizens object to the nonconsensual scraping of their work and personal data from the internet and its use to profit others without asking or paying them. One type of AI tool used to generate text is the large language model (LLM). LLMs analyze vast bodies of “training text”—text, often scraped from the internet, fed to the AI to teach it patterns—and produce human-sounding answers to user prompts. They produce these answers by calculating what word is likely to come next in a sentence, based on the analysis they’ve performed of the training text, often running prompts and results through multiple layers of processing tuned for subject matter expertise, ethical guardrails and other refinements.
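That next-word calculation can be pictured with a toy sketch. The miniature below is illustrative only: production LLMs use neural networks trained on vast corpora of tokens, not simple word counts, but the principle of generating text from statistics learned over training text is the same.

```python
# Toy illustration of next-word prediction: count which words follow
# which in the training text, then generate by sampling a likely
# successor at each step. Real LLMs use neural networks, not counts.
import random
from collections import Counter, defaultdict

training_text = (
    "the kingdom of heaven is like a mustard seed "
    "the kingdom of heaven is like yeast "
    "the kingdom of heaven is like a treasure"
)

# Count how often each word follows each other word (a bigram model).
follows: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a continuation one probable word at a time.
out = ["the"]
for _ in range(7):
    out.append(next_word(out[-1]))
print(" ".join(out))  # e.g. "the kingdom of heaven is like a mustard"
```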

Green declined a request by the Journal for an interview or comment on AI and copyright issues.

Kieran Wilson, who sits on the national council of the Prayer Book Society of Canada, says his organization is trying to be open-minded about AI tools. But for the moment, he says, he and the society’s other leaders trust them only with very simple, straightforward tasks, like typesetting a bulletin or looking up psalms and hymns. Tasks that involve generating prayers or giving advice are less appropriate for AI, he says.

Of AskCathy.ai, he says, “I think we have pretty serious concerns about something like that because AI doesn’t have spiritual discernment. We’ve seen a lot of recent stories about the possibility for AI to hallucinate, which is basically just to make up information, telling people what they want to hear without being able to discern the truth.”

Anglicans believe clergy receive divine graces to perform the task of spiritual discernment when they are ordained, he says. And while that’s not a guarantee they always get it right, he argues, at least when they get it wrong, they can be held accountable. When Christians use church-approved liturgical texts, he says, they are participating in the embodied, baptismal life of the Church. He questions whether the output of disembodied software can add anything meaningful to that process, even though it may be grammatically convincing.

Like Wilson, American author and writing teacher John Warner is profoundly skeptical of treating the output of LLMs as meaningful writing. His upcoming book, More Than Words, examines the lessons the emergence of AI can teach about how students are taught to write. When a person sits down to write, he says, they have an idea in mind which they intend to convey to a reader using words as a medium, a process he argues is fundamentally distinct from probabilistically calculating responses based on input.

“What happens when humans write has no relationship to what happens when those things generate syntax,” Warner says.

In May of this year, Archbishop of Canterbury Justin Welby signed his name to the Rome Call for AI Ethics, a document created by the Roman Catholic Church’s Pontifical Academy for Life. The Vatican says the document is aimed at fostering a sense of responsibility among organizations, businesses and governments to ensure everyone benefits from the development of the new technology and that its development is administered in a way that respects human dignity.

Echoing a common theme from his teaching on technological equity, Welby said AI “cannot be the sole property of its developers, or any single part of the human race.”

