
An Introduction to Access Technology for the Faith Community

24 October 2025 at 20:22

Access technology offers people with disabilities adaptive tools to use computers, smartphones, and other devices. I am a blind Catholic professional with experience in academic political science. I also have training and program management experience in access technology, helping other blind and low-vision users solve difficulties with their devices. As I have engaged with AI and Faith, I have noticed that the community has few current links with the conversation around accessibility, and I hope this article will begin to change that.

I will focus on the types of access technology I know best: screen readers and dictation. Screen readers (also known as text-to-speech) allow users to hear the device speak what is on the screen; they usually require some comfort with keyboarding or use of a touch screen. Dictation (also known as speech-to-text) allows users to talk to the device, optionally receiving a spoken response during the dictation process. To understand the difference on an iPhone: Siri is entirely voice activated, while the VoiceOver screen reader requires use of the touch screen. Smartphones have built-in screen readers (VoiceOver for Apple, TalkBack for Android) that integrate screen reading and dictation more seamlessly than computers do, for those who have a high comfort level with them. Blind users often use a combination of screen readers and dictation when using AI. Because AI applications often have their own dictation abilities, which also offer voice feedback, there are more options for those less comfortable with screen readers.

I hope that future articles by others more qualified will delve into access technology issues for other disabilities: adaptations for those who cannot use their arms, closed captioning for the deaf and hard of hearing, magnification and contrast for those with low vision, and bioethics issues around artificial body enhancements and Neuralink.

History of faith and accessibility

One of my reasons for interest in the conversation between faith and accessibility is that faith has already played a major part in uplifting people with disabilities, particularly in advancement for the blind. Technology originally designed for the blind has greatly influenced technology for all, as detailed in a memorable chapter of Andrew Leland’s book The Country of the Blind. Louis Braille (1809–1852), the blind inventor of the braille reading code, was a devout Catholic who used his invention to create a larger library of sheet music for blind church organists. Religious groups took a leading role in producing and distributing braille books throughout the twentieth century, including the Xavier Society (of which I am a board member), the American Bible Society, and the Theosophical Society. Braille’s ingenuity and his attempts to develop an early version of the typewriter tested the boundaries of language and technology. Audiobooks, which were initially produced for the blind, are now used by many sighted readers, and many early audiobooks were religious.

Image description and faith

One of the lesser-known uses of AI is its ability to describe images. You can share a picture or an inaccessible file with an AI application, and it will provide information about what is in the image, including any discernible text, along with the ability to ask further questions and share the image with another application or another person. The more common AI tools can describe images, but many of us in the blind community prefer apps built for the blind, including Microsoft’s Seeing AI and the blind-founded Be My Eyes. These apps predate what most people think of as AI: Be My Eyes started as an app for calling remote human volunteers, while Seeing AI initially focused on reading labels, but both received major updates in 2023.
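
For the technically curious, the workflow these apps expose is now also available through public APIs. Below is a minimal sketch, assuming the OpenAI Node SDK; the file name, prompt, and model name are illustrative only, and nothing here is drawn from how Seeing AI or Be My Eyes are actually built:

```typescript
// Minimal sketch: send a local photo to a vision-capable model and ask for a
// description. Assumes the OpenAI Node SDK; file name, prompt, and model name
// are hypothetical illustrations.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Encode the photo as a data URL so it can be attached to the prompt.
const imageBase64 = readFileSync("chapel.jpg").toString("base64");

const response = await client.chat.completions.create({
  model: "gpt-4o", // hypothetical choice of vision-capable model
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image, including any text you can read." },
        { type: "image_url", image_url: { url: `data:image/jpeg;base64,${imageBase64}` } },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```

A follow-up question about the same image is simply another message appended to the conversation before the next call.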

The uses of image description to benefit people of faith are numerous: from gaining a practical orientation to a sacred space, to providing a better understanding of religious art than blind people have ever had before. In my experience, AI applications can correctly identify the names of religious items, but continued collaboration is necessary to make sure models do not contribute to subtle misinterpretations.

Research and writing tools

Accessible AI tools allow blind users to research questions about religious doctrine, scripture, history, prayers, and current events, whether for personal study or professional work. The most common AI tools, like ChatGPT and Gemini, have accessibility teams which apply WCAG and ARIA accessibility standards. One such practice is the use of headings, which especially benefits computer users. If I press the “H” key on my computer, I can move between my prompt and the various sections of the AI’s response. Buttons to copy, share, or download a file are also relatively easy to find.
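
As a rough illustration of what those standards look like in practice, here is a minimal sketch (my own, not the actual code of ChatGPT or Gemini; element names and labels are hypothetical) of the two patterns just described: real heading elements that the “H” key can jump between, and buttons with explicit accessible names:

```typescript
// Minimal sketch of the two accessibility patterns described above, assuming
// a browser DOM environment; all names and labels are hypothetical.

// Each section of a response gets a real heading element (not a styled <div>),
// so screen-reader users can jump to it with single-key heading navigation.
function appendResponseSection(container: HTMLElement, title: string, body: string): void {
  const heading = document.createElement("h2");
  heading.textContent = title; // e.g. "Response" or "Sources"
  const paragraph = document.createElement("p");
  paragraph.textContent = body;
  container.append(heading, paragraph);
}

// Icon-only buttons need an explicit accessible name; without aria-label,
// a screen reader can only announce "button".
function makeCopyButton(onCopy: () => void): HTMLButtonElement {
  const button = document.createElement("button");
  button.setAttribute("aria-label", "Copy response");
  button.addEventListener("click", onCopy);
  return button;
}
```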

I have used AI to shorten the process of finding the traditional Latin Mass propers that I sing in my church choir. As for writing, I have found ChatGPT’s ability to generate a prayer plan based on a particular faith to be helpful. Of course, like anyone else, screen reader users need to avoid the pitfalls of AI-driven research, from asking the wrong questions to trusting hallucinations.

One project that needs further work is making sure smaller apps designed for a particular religious viewpoint are accessible. Many of them, in my limited experience, are mostly navigable but could offer a better user experience, especially by labeling certain elements more clearly.

Where do we go from here? Bridging Ethics and Accessibility

I will conclude by noting that, like any other group, blind people (and the smaller group of blind people who identify with a religious faith) hold a variety of opinions about AI. Some of these are influenced by our lives as blind people, but others come from our deeply held personal and intellectual commitments. As a young father, I want to limit my children’s exposure to AI at an early age, primarily because it contributes to a preexisting problem of too much time spent in the virtual world. I am concerned about over-reliance on AI among students and others who need to continue developing their skills in critical thinking and various content areas. I think we should encourage our religious leaders to avoid using AI to write sermons; rather, it should be used for background research only.

Accessible AI has opened the world of information to blind people, in some ways building on the successes of search engines and human-curated projects like Wikipedia (where I served as an administrator as a teenager). I do not want accessibility to be the reason that someone does not use AI, even if it is for a purpose I personally disapprove of.

I look forward to continuing the conversation; I’m happy to receive emails (covich7@gmail.com) and LinkedIn messages with any thoughts, especially about improving religious apps.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


Can AI and Faith-Based Hope Coexist in a Modern World?

23 October 2025 at 20:55

“… hope does not disappoint, because the love of God has been poured out within our hearts through the Holy Spirit who was given to us.” Romans 5:5 NASB

Artificial intelligence isn’t a guest visiting for a season; it has moved in and set up shop. It lives in our phones, churches, hospitals, and homes. It curates our playlists, predicts our spending, suggests our prayers, and sometimes even writes our sermons. Coexistence, then, is not optional. The question is whether we can coexist in a spiritually healthy manner, one that deepens our humanity rather than dilutes it.

To coexist faithfully means to let neither fear nor fascination rule us. Fear convinces us that AI will replace us; fascination tempts us to let it. Both miss the point. People of faith are called to live alongside technology with discernment and humility, resisting both the illusion of control and the despair of irrelevance.

For all its predictive brilliance, AI cannot pray, weep, or wonder. It can mimic compassion, but not surrender. It can analyze human emotion, but not experience it. The Franciscan imagination reminds us that creation, including the human-made world of code and circuitry, is still part of God’s world. But only humanity bears the capacity for soul, for longing, for love that suffers and redeems.

Coexistence, then, is not a negotiation with machines; it is a spiritual practice among humans about how we use them.

1. Hope as Surrender, Not Optimism

Faith-based hope is not the same as optimism. Optimism is a weather forecast; hope is a covenant. Optimism predicts outcomes; hope surrenders them.

In the Franciscan tradition, hope emerges not from certainty but from trust, trust that divine love continues to work even in confusion and disruption. As St. Francis taught, we find God not in control but in relinquishment. Hope, for Francis, was not a rosy confidence that things would turn out fine, but the willingness to walk barefoot into the unknown, trusting that God’s presence would meet him there.

When we mistake AI’s forecasts for faith’s hope, we confuse data confidence with spiritual trust. An algorithm might predict recovery rates for the sick or estimate climate outcomes for the planet. These forecasts can be useful, even inspiring, but they can’t teach us how to sit with grief, how to pray through uncertainty, or how to love what we may lose.

Hope begins where prediction ends. It is born when we choose faithfulness over control, willingness over willfulness. The AI age tempts us to measure everything, to optimize, to manage risk, to secure results. But the Franciscan path teaches that surrender is not passivity; it’s the deepest form of participation. It is the art of letting God’s grace do what our grasping cannot.

2. Solidarity as the Face of Hope

Hope in the Christian imagination is never solitary. It is, as the prophets declared, born in community. Hope is sustained not by certainty but by companionship. The Franciscan way calls this being with rather than doing for.

Solidarity is where hope breathes. It is incarnational, embodied in listening, touch, and shared presence. In this light, AI can make hope more accessible and actionable by connecting communities across distance, revealing hidden needs, or amplifying marginalized voices. It can process massive amounts of data to show us who is being left behind. It can remind us, through pattern and prediction, that our neighbor is closer than we thought.

But solidarity must remain human. A chatbot can send comforting words, but it cannot keep vigil at a bedside or shed tears that sanctify suffering. Yet it can free human caregivers from administrative burdens so that they can show up in love. When technology serves relationships rather than replacing them, it becomes a partner in the work of hope.

Francis of Assisi would recognize this: the holiness of proximity. To “be with” creation and each other is the heart of hope. Even the best-designed algorithm cannot incarnate presence. It can only point toward it. And perhaps that is its highest ethical calling, to remind us of what only we can do.

3. Prophetic Hope in Disruption

The Hebrew prophets Isaiah, Jeremiah, and Amos offered hope not in comfort but in collapse. They dared to believe that God’s newness could rise from ruins. Walter Brueggemann calls this “the horror of the old collapsing and the hope of the new emerging.”

Our era’s disruptions (climate change, displacement, and digital isolation) find a mirror in the age of AI. The prophetic task is not to resist technology outright but to reclaim its direction. Faith communities have a prophetic imperative to ensure that AI serves justice, mercy, and shared flourishing.

AI can go beyond prediction when it feeds real hope: when it exposes injustice, reveals truth, or helps imagine new economies of care. Imagine algorithms that prioritize the hungry over the profitable, or systems that help restore ecological balance rather than exploit it. Prophetic hope transforms technology from a mirror of power into a window of possibility.

Yet prophecy always begins with lament. We must name the pain of our age, the loneliness, the disconnection, the temptation to substitute simulation for presence. In naming it, we keep it human. The prophets of Israel didn’t offer quick solutions; they offered faithful witness. Likewise, our hope for AI is not that it will save us, but that through it, we might rediscover what needs saving: our compassion, our humility, and our sense of shared destiny.

4. A Future Worth Coexisting With

To coexist with AI faithfully is to remember that intelligence is not wisdom, and power is not love. AI may analyze vast datasets, but faith invites us into mystery, the space where surrender becomes strength and community becomes salvation.

A spiritually healthy coexistence neither idolizes AI nor exiles it. Instead, it consecrates the tools of our age for the service of God’s reconciling work. Technology, like fire or language, can both heal and harm. Our task is to keep it lit with compassion, humility, and justice.

This is not nostalgia for a pre-digital past; it is a call for moral imagination. Coexistence means insisting that progress must serve presence, that algorithms must bend toward mercy, and that the ultimate measure of intelligence is love.

The Franciscan tradition, with its emphasis on humility and relationality, offers an antidote to the empire of efficiency. It invites us to see AI not as a rival intelligence but as a mirror reflecting what we value. The question is not, “Can AI love?” but “Can we?”

Conclusion: The Stubborn, Sacred Hope

Artificial intelligence can calculate probabilities, but it cannot kindle hope. Hope is the province of the soul, the stubborn, sacred belief that life can be renewed even when the data says otherwise.

If we approach AI with humility, we may yet find that it sharpens our awareness of what is uniquely human: our vulnerability, our longing for connection, our capacity for grace.

In the end, coexistence with AI is less about technological control and more about spiritual formation. The future worth coexisting with will be one where our tools amplify love rather than efficiency, justice rather than profit, and wonder rather than fear.

Machines may forecast the future, but only people of faith can hope their way into it.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


AI and Faith Welcomes New Expert

26 September 2025 at 19:55

Neylan McBaine (Advisor) is a dynamic leader who transforms bold ideas into movements, blending corporate strategy with a deep commitment to cause-driven campaigns that inspire change, mobilize communities, and reshape culture. She most recently led interfaith and policy partnerships at Project Liberty, including the passage of the Digital Choice Act in Utah, which mandates data portability and interoperability among social media platforms. Neylan has founded two non-profits dedicated to elevating women of faith and has led her own software company serving independent music teachers. A graduate of Yale University, she is the mother of three young adult daughters.


Artificial Intelligence in the Medical Field from a Priestly Viewpoint – Part One

26 September 2025 at 14:52

I am not a doctor (medical or otherwise). I am a cleric with clinical training and forty-plus years of parish experience. I am also a researcher dedicated to studying postmodernism. This includes considering the place of artificial intelligence (AI) and robotics in society.

Before you glance away, thinking this is fluff or irrelevant, take a moment for “deep reading.” This is an article rooted in my experiences, research, and reflections. Please do not fall into the trap of our ever-shortening attention spans and move on. Take a few minutes to read and think. In our postmodern society we are constantly overloaded, overstressed, and overwhelmed, and we tend to “scan and skim” rather than devoting effort to reading beyond surface content and delving into the context of the material. We may read enough to get the gist of an article, journal, or book and then stop. We are impatient because we have dozens of other tasks in front of us.

Facets of postmodern society, particularly in the medical world, are often enabled by AI models. These models are fed by algorithms, autocorrections, and auto-suggestions, and they become an ally, which may leave us thinking of Lord Acton’s profound warning about absolutes; in an 1887 letter to an Anglican bishop, he wrote that “power tends to corrupt, and absolute power corrupts absolutely.” AI is becoming a vastly powerful ally in the medical community. Does that mean we are offloading previously human activity to a machine? The answer is yes, but we should be aware of potential consequences.

AI and robotics in surgery and inpatient treatment are increasingly part of our reality. As a clear example, offices send emails, texts, or phone calls to “nudge” people into remembering their appointments, or use AI to track inventory for operating rooms. Based on prior successes, we have increased expectations of what AI and robotics can provide. Our postmodern peculiarities drive us towards impatience and instant gratification; we want perfect care and perfect results in ten minutes. This puts pressure on all levels of the medical community.

In our consumer-driven lives we may become irritable with those providing medical care. One recent article notes:

“Violence against physicians is a serious problem worldwide and is increasing day by day. For this reason, physicians have difficulty fulfilling their duties, are exhausted, migrate abroad, or move to the private sector. As a result, it becomes challenging to maintain healthcare services in the public sector.”1 Doctors feel the pinch from a demanding public and also become impatient. Canadian psychiatrists Robert Maunder and Jonathan Hunter wrote that doctors once waited 18 seconds before they interrupted a patient’s story. This duration has now shortened to about 11 seconds.2

Overloaded schedules, mountains of paperwork, numerous patients, an aging population with greater needs, and too few doctors are all pushing us to a point where AI is on the road to greater use in medical care. Financial restrictions by governments and hospital boards have played a part in limiting staff and costs, and in looking to AI. You may not agree with this statement, but one cannot deny we are on that path!

One evolving consequence of AI is that we may need fewer human doctors and staff. Today we have live doctors on video screens, but perhaps in a couple of years we may see AI-powered doctors, where the doctor we see on screen is an AI-generated face. Given our consumer need for gratification, we might be able to pick the “face” of the doctor we desire. We may choose from a selection of television characters such as Dr. Grey (Ellen Pompeo), Dr. Doug Ross (George Clooney), or Tom Cruise as an animated doctor. Seniors may choose to see Dr. Ben Casey (Vince Edwards) or Dr. Kildare (Richard Chamberlain). Gen X folks may select Dr. Doogie Howser (Neil Patrick Harris) or Dr. Shaun Murphy (Freddie Highmore). Some may choose the grumpy, condescending Dr. Gregory House (Hugh Laurie).

To continue this currently absurd satire, each of these AI-manufactured faces could offer medical advice drawn from the whole world’s complete, combined, and documented medical records. In other words, in nanoseconds they could access all the procedures – pro and con – that seemingly address the specific patient before them. Could “big brother” someday exist? Time will tell. In three words, AI would have “total evidence awareness.” In this possible AI scenario, there may be robotics in the room with the manufactured video doctor. These robots may be programmed to undertake many tasks such as drawing blood, checking blood pressure, or providing x-rays. Perhaps in a decade or two, they might do minor surgery. We should also consider whether medical care should be driven by consumer demands or by medical necessity. AI can assist the medical world by collecting, and perhaps at the next step of development discerning, incredible amounts of data, churning it into useful information and then into knowledge. Often, we assume that wisdom naturally flows from knowledge. Unfortunately, this is not always true. Someday in the distant future, this priestly futurist writes, AI and robotics may even offer wisdom and create diagnoses.

In our impatient world, we need to learn patience. We need patience with others and patience with ourselves. In our instant gratification world, we need to take time to look for the bigger picture and to wait in anticipation for results. Perhaps if we find a balance between AI, algorithms, robotics, self-awareness, and inward journeying we may move forward in healthy partnerships.

No AI was used to write or rewrite this article; AI was used only to help with grammatical errors. A human medical author contributed insights and proofreading to this piece.


References

  1. Yücel Özden, K.B., Sarıca Çevik, H., Asenova, R., Ungan, M. (2024). Guardians of health under fire: Understanding and combating violence against doctors. Aten Primaria. 2024 Sep;56(9):102944. doi: 10.1016/j.aprim.2024.102944. Epub 2024 Apr 27. PMID: 38678853; PMCID: PMC11066614. Retrieved October 10, 2024.
  2. Maunder, R., Hunter, J. (2021). Damaged: Childhood Trauma, Adult Illness, and the Need for a Health Care Revolution. Toronto: University of Toronto Press, p. 62.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


How Religious Leaders Evaluate and Use Technology: Insights from Microsoft Research #45

25 September 2025 at 21:28

This summer, Nina Lutz, a PhD student at the University of Washington, joined Microsoft Research to interview nearly 50 religious leaders on how they approach technology in both ministry and personal spiritual practice. Her work, part of the new Technology for Religious Empowerment (T4RE) initiative, brings unique faith-based perspectives into the software creation process. In this podcast, Nina shares the surprising insights she uncovered from her conversations.

Meet our Speakers:

Host: David Brenner currently serves as the board chair of AI and Faith. For 35 years he practiced law in Seattle and Washington DC, primarily counseling clients and litigating claims related to technology, risk management and insurance. He is a graduate of Stanford University and UC Berkeley’s Law School.

Guest: Nina Lutz is a PhD student at the University of Washington. Her research focuses on digital safety and participatory visual culture as they intersect during high-stakes events like elections and with intimate personal experiences, like immigration and religion. This summer, via her Microsoft Research internship, she conducted a landscape analysis of how religious stakeholders conceptualize and evaluate technology and their relationships with technology in their daily and religious lives. She also led a first-of-its-kind interfaith red-teaming digital safety exercise at Microsoft Research with Seattle-area faith leaders.


Views and opinions expressed by podcast guests are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

Production: Pablo Salmones and Penny Yuen
Host: David Brenner
Guest: Nina Lutz
Editing: Isabelle Braconnot
Music from #Uppbeat
License code: 1ZHLF7FMCNHU39

Listen to more episodes here.



Exploring AI, Faith, and War: Interview on “¡Qué Tal Fernanda!”

10 September 2025 at 21:00

Artificial Intelligence, war and religion—three worlds that are increasingly intersecting. In a major step toward expanding its presence in Mexico and Latin America, AI and Faith took center stage in an exclusive interview on ¡Qué Tal Fernanda!, one of the most influential talk shows in Mexico.

Broadcast daily for over 20 years across radio, television, and digital platforms, ¡Qué Tal Fernanda! is led by renowned journalist Fernanda Familiar, who boasts over one million Twitter followers. The interview was hosted by Fernanda Familiar and Pablo Ruz Salmones and featured Shannon French, the Inamori Professor in Ethics and the Director of the Inamori International Center for Ethics and Excellence at Case Western Reserve University, diving into the thought-provoking relationship between artificial intelligence, warfare and religious beliefs.

¡Qué Tal Fernanda! is broadcast by Grupo Imagen Multimedia, one of Mexico’s most prominent media conglomerates. As such, this interview marks a significant milestone for AI and Faith in the region. Don’t miss this insightful conversation that bridges technology and spirituality in an ever-evolving world.


Views and opinions expressed by interviewees are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


A Pope For The Age Of AI

29 July 2025 at 22:40

Speculation regarding how Pope Leo XIV will lead the Catholic Church has been rampant. Amid this lack of clarity, one of his priorities has come into focus: artificial intelligence (AI). Since assuming the papacy, Leo has begun making the case that the Church should be prepared to address the broader impacts of this advanced technology. For instance, in a meeting with the College of Cardinals shortly after his election, the pope noted that “the treasury of its social teachings” would be available to those concerned by AI’s growing ubiquity in day-to-day life.1 Through statements like these, Leo has shed light on what he plans for the Church’s future, namely for it to act as a moral arbiter in discussions surrounding this technology’s place in society.

Although Leo is attuned to the modern challenges presented by AI, his perspective on these problems has been shaped by past pontiffs. He has drawn inspiration from his namesake, Leo XIII, who advocated for the interests of workers displaced by technology amid the Industrial Revolution. Additionally, he has followed in the footsteps of his immediate predecessor, Francis, whose championing of responsible technology use was a core component of his late papacy. Leo will cite this tradition at a time when AI discourse among world leaders has centered on its implications for security, rather than its potential for harm. He has the opportunity to change the conversation, all while reasserting the Church’s role as a voice for those swept away by the undertow of disruptive innovation.

Looking Backwards, Moving Forwards

Among keen observers, there were early indicators of what might inform Leo’s social policy. For instance, his choice of name, an homage to Leo XIII, made clear that labor rights would be at the forefront of his agenda. The Conversation drew attention to how Leo XIII’s Rerum Novarum, a landmark encyclical which decried the wealth inequalities that defined the Industrial Revolution, has become a foundational text for Catholic thinkers dedicated to economic justice.2 The papal letter noted that workers are excluded from the benefits of their labor by their employers, leaving many marginalized and exploited. To remedy this problem, the 19th-century pontiff charged that workers must share in the ownership of their labor. This solution would not only help close the economic disparities that divided classes; it would also protect the dignity of laborers at a time when their rights were superseded by the interests of industrialists.

The challenges Leo XIII grappled with remain relevant well over a century later. Even so, Leo has the opportunity to adapt his namesake’s teachings for this moment, pulling from them to argue that workers should not be treated as an impediment to progress. America Magazine highlighted how Leo XIII’s commentary on the separation of workers from their labor might serve as a lodestar for Leo as he refines his philosophy.3 The impulse to defend the laborers whose contributions make innovation a reality, while ensuring they can reap the rewards of progress, feels especially pertinent to our era. AI’s capacity to reflect human knowledge is impressive, and has prompted employers across industries to consider replacing workers they deem “expendable.” Leo may outline a different path, reminding developers that their products are not only made possible by the efforts of their employees, but should serve as a tool to help them in their work.

Following Francis’ Footsteps

Leo can also look to the recent past for insight on how the Church should approach AI. His mentor, Francis, made a point of engaging with Silicon Valley throughout his tenure. Vox noted how the pontiff saw this as a necessity, particularly if the institution he represented was to remain relevant.4 This forward-looking strategy, from convening cross-sectoral conferences to organizing student hackathons,5 challenged the long-held notion that the Holy See ignores technological progress.6 Importantly, this course of action enabled the Church to join debates surrounding technology’s use for good, promoting the interests of humanity against the realities of capitalism. When confronted with moral questions, such as what to do when novel innovations entrench the power of economic elites, Pope Francis was well-positioned to offer moral guidance on how to uphold human dignity. Ultimately, his advocacy in this space created a niche by which the Church could amplify its social and moral teachings.

Francis’s perspective on the importance of responsible technology development is best encapsulated in his doctrinal note, Antiqua et Nova.7 Word on Fire unpacked how this document, which specifically touches on the deployment of AI, captured Francis’s optimistic, but tempered, stance on technology.8 From modernizing education to transforming healthcare, the pontiff acknowledged that AI has immense potential to better the lives of people across the globe. However, he also conceded that AI could worsen inequalities, especially if a well-resourced class continued to monopolize its development. With this in mind, Francis posited that discernment would be key when deciding how to interact with AI. Knowing when AI could be appropriately used to help people, and how this technology could be exploited by those unaccountable to the public, should guide us moving forward. This invaluable lesson may also be useful to Leo as he wades into the same debates which preoccupied his predecessor.

The Road Ahead For Rome

In understanding the thinkers who shaped Leo’s worldview, it becomes easier to see how he looks at the subject of AI. Protecting the rights of workers, particularly those whose livelihoods are threatened by this technology, is likely to be central to his approach. He may also emphasize how the public at large has contributed to AI’s growth, contrasting it with the wealth that has been amassed by elites in the space. Critically, the pope will insist that AI must advance the social good, and he will undoubtedly call on developers to create tools that respect the rights of all people. To accomplish these goals, he must prioritize engagement with those responsible for creating AI models. His recent audience with leading executives, where he called for tighter regulations on AI tools, is a strong indicator that he is ready to put these lessons into practice.9

An advocate comfortable using their influence to press for the ethical and responsible use of AI is needed now more than ever. Within the technology policy space, there has been a gradual shift away from AI safety to AI security, largely driven by geopolitical considerations. Policymakers who raised alarms that AI could disproportionately impact disadvantaged groups are no longer in the spotlight. In their stead, a new crop of officials has stepped in to turn the AI conversation on its head, shifting focus to how this innovation could strengthen national security.10 At this critical juncture, a new voice is needed to elevate these overlooked concerns. Grounded in the past, yet aware of the challenges facing society at this moment, Leo may be the right choice. Although much remains to be seen, his background provides hope among those passionate about the use of AI for the betterment of humanity.


References

  1. Winfield, Nicole. “Pope Leo XIV Lays out Vision of Papacy and Identifies AI as a Main Challenge for Humanity.” AP News, May 10, 2025. https://apnews.com/article/pope-leo-vision-papacy-artificial-intelligence-36d29e37a11620b594b9b7c0574cc358.
  2. Schneider, Nathan. “19th-Century Catholic Teachings, 21st-Century Tech: How Concerns about AI Guided Pope Leo’s Choice of Name.” The Conversation, May 21, 2025. https://theconversation.com/19th-century-catholic-teachings-21st-century-tech-how-concerns-about-ai-guided-pope-leos-choice-of-name-256645; Leo XIII. “Rerum Novarum: Encyclical of Pope Leo XIII on Capital and Labor.” The Holy See, November 15, 2019. https://www.vatican.va/content/leo-xiii/en/encyclicals/documents/hf_l-xiii_enc_15051891_rerum-novarum.html.
  3. Dunch, Matthew. “Pope Leo XIV Can Bring Catholic Social Teaching into the A.I. Age.” America Magazine, May 15, 2025. https://www.americamagazine.org/faith/2025/05/15/pope-leo-rerum-novarum-artificial-intelligence-250689.
  4. Samuel, Sigal. “The New Pope Has Strong Opinions about AI. Good.” Vox, May 28, 2025. https://www.vox.com/future-perfect/414530/pope-leo-ai-artificial-intelligence-catholicism-religion.
  5. “VHacks”, 2019. https://www.vhacks.org/
  6. Duncan, Robert. “Pope Francis Speaks to Silicon Valley CEOs and Moral Theologians about Dangers in Tech Revolution.” America Magazine, September 27, 2019. https://www.americamagazine.org/politics-society/2019/09/27/pope-francis-speaks-silicon-valley-ceos-and-moral-theologians-about; Sherwood, Harriet. “Vatican Hosts First Hackathon to Tackle Global Issues.” The Guardian, March 7, 2018. https://www.theguardian.com/world/2018/mar/07/vatican-hosts-first-hackathon-to-tackle-global-issues.  
  7. Francis. “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence.” The Holy See, January 28, 2025. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html.
  8. Umbrello, Steven. “Pope Leo XIV and the New Social Question of AI.” Word on Fire, May 20, 2025. https://www.wordonfire.org/articles/pope-leo-xiv-and-the-new-social-question-of-ai/.
  9. Quiroz-Gutierrez, Marco. “Pope Leo Wades into Business Regulation, Preaching the Idea of an Ethical AI Framework to Tech Executives.” Fortune, June 20, 2025. https://fortune.com/2025/06/20/pope-leo-ai-regulation-holy-see-vatican-tech/. 
  10. Pillay, Tharin, and Harry Booth. “5 Predictions for AI in 2025.” Time, January 16, 2025. https://time.com/7204665/ai-predictions-2025/.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


The Chaplain’s Paradox: Moral Presence in the Age of Algorithmic Warfare

18 July 2025 at 20:03

In the novel 2034, Commodore Sarah Hunt watches helplessly as Chinese hackers cripple American naval assets across the Pacific. The USS John Paul Jones burns in the South China Sea while autonomous systems turn against their operators. In Ghost Fleet, commanders orchestrate drone swarms with mathematical precision, their decisions compressed into microseconds of algorithmic calculation.1 Both novels paint vivid pictures of future warfare, though something crucial is missing from their pages.

There are no chaplains.

No one offers prayers for the dying. No one helps commanders wrestle with the moral weight of their choices. No spiritual advisor serves as a bridge between human conscience and machine calculation. This absence reveals our assumptions about technological progress: that advancement necessarily means leaving behind the spiritual dimensions of warfare.

As a Navy chaplain who has served alongside both cutting-edge technology and ancient human struggles, I find this omission troubling. The marginalization of spiritual guidance at precisely the moment when machines begin making life-and-death decisions represents a fundamental misunderstanding of what preserves humanity in combat.

The Speed of Silicon, The Pace of Souls

Modern command centers already showcase our algorithmic future. Massive displays show satellite feeds, drone footage, and sensor data from worldwide operations. Artificial intelligences (AIs) identify targets, track submarines, coordinate supply chains, and predict weather patterns with unprecedented accuracy. The Pentagon’s Joint All-Domain Command and Control program represents billions invested in connecting every military asset through machine intelligence.

These systems promise what military strategists call “decision superiority.” This superiority is the ability to observe, orient, decide, and act faster than any adversary. Sun Tzu wrote that speed is the essence of war, yet when choices unfold faster than human thought, where does conscience fit?2

Throughout my naval career, I have witnessed the tension between operational tempo and moral reflection. A Marine once described his experience operating a semi-autonomous weapons system: “The computer identified the target and calculated threat levels. I made the final decision to engage and that choice haunts me. The system moved on and I was left remembering faces.”

This compression of moral decision-making into machine timeframes creates a temporal displacement, a separation of action from its ethical processing. Ancient Greeks distinguished between measured time (chronos) and the right time for action (kairos). Moral reasoning requires kairos, moments when wisdom can emerge from experience and the full human significance of actions becomes clear.

Sacred Traditions and Lethal Decisions

Every major faith tradition recognizes that taking human life ranks among the most morally significant acts possible. This recognition transcends religious boundaries because it addresses fundamental questions of human dignity and responsibility.

Buddhist teachings emphasize the interconnectedness of all beings through compassion (karuna). When Buddhist service members kill in battle, they carry that action’s karmic weight throughout their lives, understanding that harming another inevitably harms oneself.3 The act creates a permanent alteration in the web of existence.

Jewish ethical thought places life preservation (pikuach nefesh) at its center while acknowledging that sometimes preserving life requires taking life. Rabbinic tradition speaks of an integrated knowledge (da’at) that combines information with experience, law with empathy, and universal principles with contextual awareness.4 Algorithms cannot achieve da’at because this understanding emerges only from lived human experience of vulnerability and mortality.

Christian theology affirms that humans bear unique moral capacities and responsibilities as image-bearers of God. The Vatican’s recent document Antiqua et Nova explicitly states that moral agency cannot be separated from human consciousness.5 Christians who engage in warfare must fully own their lethal choices, recognizing that even justified violence remains tragic and that necessary killing still demands spiritual processing.6

Islamic jurisprudence emphasizes balance between the divine attributes of justice (’adl) and mercy (rahma). Warriors under Islamic law must maintain the capacity for mercy and acceptance of surrender.7 Autonomous weapons systems cannot exercise mercy because true comprehension of suffering exceeds computational capabilities.

These theological frameworks share the crucial insight that the transformation that occurs when one takes life cannot be delegated to machines. The responsibility and the spiritual processing that accompanies it remains irreducibly human.

The Scattered Responsibility Problem

Consider this scenario increasingly discussed among military ethicists. An autonomous drone identifies what its algorithms determine to be an enemy formation and engages without human authorization. Later analysis reveals the target was a wedding celebration. Thirty civilians are dead.

Who bears responsibility? The programmer who encoded the targeting parameters? The commander who deployed the autonomous system? The general who authorized such operations? The defense contractor who manufactured the platform?

Autonomous weapons systems introduce unprecedented challenges to military ethics by distributing moral responsibility across multiple actors separated by time and space. This diffusion creates what philosophers term the “responsibility gap,” a situation where harm occurs even though no individual can be held fully accountable.8

Research by Jonathan Alexander, a Navy chaplain with combat experience, explores how this gap affects those who work alongside autonomous systems. His concept of “moral enmeshment” describes the psychological reality that operators feel deeply responsible for outcomes they cannot directly control.9 Technology displaces moral complexity temporally and psychologically.

Chaplains as Moral Infrastructure

Military chaplains provide a moral infrastructure, the human relationships and institutional practices that sustain ethical reflection within military organizations. Research demonstrates that units with embedded chaplains experience significantly lower rates of moral injury among veterans.10

This protective effect stems from chaplains creating spaces for reflection where service members can step back from immediate tactical pressures to examine larger purposes and meanings.11 In my experience conducting moral injury assessments, I have found that service members often struggle not with the correctness of their actions but with their inability to process those actions spiritually and ethically.

The chaplain’s presence serves multiple functions simultaneously. Through this institutional role, military organizations acknowledge that personnel possess moral dignity extending far beyond their operational duties. Such spiritual counsel equips service members with tools for navigating the moral complexities inherent in warfare while preserving connections to enduring traditions that transcend immediate tactical concerns.

When organizations lack chaplains, these functions typically disappear under operational pressures. Service members become isolated with their moral struggles. Leaders operate without spiritual counsel when making consequential decisions. The organization loses capacity for moral reflection, becoming instrumentally focused while losing sight of larger human purposes.

Designing Ethics into Military AIs

The development of artificial intelligences for military applications demands ethical input from the earliest design phases. This requires bringing chaplains into conversation with engineers, philosophers with programmers, and ethicists with operators. Interdisciplinary collaboration becomes essential for creating systems that enhance rather than diminish moral agency.

Military AIs must incorporate “ethical pauses” or “inflection points,” mandatory reflection periods built into autonomous operations.12 These pauses allow moral reasoning to continue within high-speed operational environments. Military doctrine must preserve meaningful human control over lethal decisions, maintaining human moral agency as supreme in life-and-death choices.
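
To make the idea concrete, here is a minimal sketch of what an “ethical pause” might look like in software. Everything in it is my own illustration under stated assumptions (a hypothetical review pipeline, invented names, an arbitrary timeout policy); it is not drawn from any fielded system or from Wallach’s work:

```typescript
// Hypothetical sketch of a human-in-the-loop "ethical pause": the autonomous
// pipeline may only recommend, a named human must authorize, and silence
// defaults to abort rather than to action. All names are illustrative.

interface EngagementRecommendation {
  targetId: string;
  machineConfidence: number; // 0..1, produced upstream by the targeting model
  rationale: string;         // human-readable summary for the reviewer
}

type HumanDecision = { approved: boolean; reviewerId: string };

// requestHumanReview is assumed to present the recommendation to an operator
// and resolve with that operator's decision; it is the deliberately slow,
// human step in an otherwise fast pipeline.
async function ethicalPause(
  rec: EngagementRecommendation,
  requestHumanReview: (rec: EngagementRecommendation) => Promise<HumanDecision>,
  timeoutMs: number,
): Promise<HumanDecision> {
  // There is no code path that skips the reviewer, no matter how confident
  // the machine is: the pause is mandatory by construction.
  const abortByDefault = new Promise<HumanDecision>((resolve) =>
    setTimeout(() => resolve({ approved: false, reviewerId: "none" }), timeoutMs),
  );
  // If the operator does not answer in time, the default is abort, preserving
  // meaningful human control even under operational tempo pressure.
  return Promise.race([requestHumanReview(rec), abortByDefault]);
}
```

The design point is the asymmetry: the machine can propose, but only a human can dispose, and the failure mode is restraint rather than engagement.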

Alexander’s research on moral enmeshment suggests that operators will experience moral weight regardless of automation levels. While service members may not process this burden immediately, they inevitably will reflect on their wartime service after combat ends. Technology does not solve moral complexity; it only moves it to different temporal locations.

Training for algorithmic warfare must extend beyond technical instruction to include moral reasoning skills and the spiritual dimensions of military service through community-based formation. Traditional rule-based ethics training proves insufficient for the novel moral landscapes created by human-machine collaboration.

The Cost of Efficient Victory

The absence of chaplains in our speculative fiction reveals uncomfortable truths about contemporary assumptions. These narratives show characters achieving tactical success while losing essential human qualities. The missing spiritual advisors represent our collective belief that technological progress makes spiritual guidance obsolete.

This assumption misunderstands both technology and human nature. More powerful tools create greater need for wisdom about their proper use. Complex systems increase rather than decrease the need for communities to process their implications, and they make human values more, not less, important.

Military AI systems developed without conscience will create forms of warfare that excel tactically and collapse morally. Decision superiority achieved through technological advancement cannot come at the expense of human character.

Preserving Humanity in the Machine Age

The empty spaces where chaplains should stand in our visions of future warfare represent tears in the moral fabric of military institutions. History provides sobering examples of what happens when military operations lack adequate moral infrastructure. The Abu Ghraib scandal demonstrated how the absence of chaplains contributed to systematic moral failures among guards.13 Unless we actively preserve conscience within our most advanced military systems, we will suffer deeper damage to our moral infrastructure.

We face essential choices about the direction warfare will take. We can develop weapons without moral capacity, or we can maintain our most sophisticated technology as instruments operated by morally aware human beings, or we can design AIs that actively enhance human moral reasoning rather than replacing it. This choice will determine whether future victories preserve human values or whether tactical success must be purchased through moral bankruptcy.

Machines excel at processing data with unprecedented speed and accuracy. Yet humans alone pray for the dead, comfort the living, and provide moral guidance to warriors facing complex ethical dilemmas. This work requires the irreplaceable human capacity for spiritual presence and moral reflection.

The future of warfare depends on ensuring that our most advanced technologies serve the moral agency that makes us human. In that future, chaplains stand as guardians of what we cannot afford to lose. They preserve the irreducible human capacity for moral reflection that transforms warriors into moral agents, choices into conscience, tactical success into meaningful victory, and mechanical efficiency into human purpose. Otherwise, we inherit the spiritually barren battlefields of 2034 and Ghost Fleet, where technological mastery reigns over human conscience. We must choose the path of wisdom.


References

  1. Elliot Ackerman and James Stavridis, 2034: A Novel of the Next World War (New York: Penguin Press, 2021); P. W. Singer and August Cole, Ghost Fleet: A Novel of the Next World War (New York: Houghton Mifflin Harcourt, 2015).
  2. Sun Tzu, The Art of War, trans. Samuel B. Griffith (London: Oxford University Press, 1971), 134.
  3. Sharon Salzberg, Loving-Kindness: The Revolutionary Art of Happiness (Boston: Shambhala Publications, 1995), 102–110.
  4. “Pikuach Nefesh,” Jewish Virtual Library, accessed June 25, 2025, https://www.jewishvirtuallibrary.org/pikuach-nefesh; “Da’at,” Chabad.org, accessed June 25, 2025, https://www.chabad.org/library/article_cdo/aid/299648/jewish/Daat.htm.
  5. Dicastery for the Doctrine of the Faith, “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence” (Vatican, 2025), §39, §117.
  6. Not all Christian traditions would agree with this just war framework. As Marc LiVecche notes in The Good Kill (Oxford University Press, 2021), 23, grief differs from guilt. Someone who kills in justified war is not “guilty” even though they may experience grief at taking human life. When we embrace the Niebuhrian paradox that “all killing is wrong, yet in war it is necessary,” we inevitably set service members up for moral injury by telling them they have done something “wrong.”
  7. Sohail H. Hashmi, “Islamic Ethics and Weapons of Mass Destruction,” in Ethics and Weapons of Mass Destruction, eds. Sohail H. Hashmi and Steven P. Lee (Cambridge: Cambridge University Press, 2023), 321–352.
  8. Robert Sparrow, “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems,” Ethics & International Affairs 30, no. 1 (2016): 93–116.
  9. Jonathan W. Alexander, “Lethal Autonomous Weapon Systems and the Potential of Moral Injury” (PhD diss., Salve Regina University, Newport, RI, 2024), ProQuest Dissertations & Theses Global.
  10. Jason A. Nieuwsma et al., “Chaplaincy and Mental Health in the Department of Veterans Affairs and Department of Defense,” Journal of Health Care Chaplaincy 19, no. 1 (2013): 45–74.
  11. Cynthia L. G. Kane, “The United States Military Chaplaincies’ Missing Peace” (Doctoral dissertation, Wesley Theological Seminary, 2022), 52–68.
  12. I originally termed these mandatory pauses “creative disruption” in my doctoral work yet have since adopted “ethical pauses” or “inflection points,” following Wendell Wallach’s usage in A Dangerous Master (New York: Basic Books, 2015), 10. “Meaningful Human Control” is another technical term regarding our posture on the continuum of autonomy. For a primer see: https://www.cnas.org/publications/reports/meaningful-human-control-in-weapon-systems-a-primer.
  13. Stephen Mansfield, “’Ministry by Presence’ and the Difference It Made at Abu Ghraib,” HuffPost, October 10, 2012, accessed June 25, 2025, https://www.huffpost.com/entry/ministry-by-presence-and_b_1912398. The U.S. military’s response to Abu Ghraib included increasing chaplain presence at facilities like Guantanamo Bay, where I served in 2004–2005 as part of this institutional recognition that moral guidance prevents rather than merely responds to ethical breakdowns.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


The Ethics of Everything—Courtesy of AI, Across Faiths and Frontiers

25 June 2025 at 21:41

Book Review: Singler, Beth, and Fraser N. Watts, eds. The Cambridge Companion to Religion and Artificial Intelligence. 1st ed. Cambridge Companions to Religion. Cambridge University Press, 2024.

Since OpenAI released ChatGPT in November 2022, public interest in computational technologies has surged. Beth Singler and Fraser N. Watts’ edited volume, The Cambridge Companion to Religion and Artificial Intelligence, responds to this shift. It traces alternative stories of these technologies—genealogies often sidelined by mainstream discourse—by exploring the diverse histories, imaginaries, and aspirations that underpin their development across multiple religious traditions and varied geographical contexts. In doing so, the contributors cast fresh light on the very moments when these technologies first took shape.

Throughout the volume, contributors deliberately use “AI” in the plural, philosophically framing it as a post-Enlightenment project rooted in rationalist zeal. Singler and Watts pinpoint its antecedents within the WEIRD strand of religious thought—Western, Educated, Industrialized, Rich, and Democratic monotheistic Protestant Christianity—before broadening the conversation with examples drawn from major world faiths. This approach dismantles the notion of a single-birth story of AI and invites readers to engage with the full spectrum of religious encounters with computational technologies. Alongside these histories, urgent questions of personhood emerge: Will these technologies serve a divine purpose or unleash disruption? Do they hold democratic promise, or will they merely reinforce existing hierarchies? Marius Dorobantu, drawing on Philip Hefner, brands AI as a “techno-mirror” that lays bare our deepest existential longings and anxieties. When every feat we once deemed miraculous can be replicated by algorithms, we face a disquieting question: are we merely elaborate biological machines, our uniqueness reducible to code?

On the other hand, Paula Boddington highlights AI’s near ubiquity in daily life. As these systems infiltrate virtually every domain, the “ethics of AI” expands into the “ethics of everything.” Yet if personhood hinges on a continuous, distinctive consciousness, how can we attribute any genuine personhood to machines whose criteria for identity differ so radically from our own? These pressing concerns about identity, agency, and moral responsibility form the backbone of the volume’s ongoing discussions, urging us to reconsider not only our technological creations but also the very foundations of our ethical frameworks. These debates are far from resolved, and this volume equips readers with critical tools to chart a course through them.

The editors observe that conversations under the rubrics of “digital religion,” “digital theology,” and “religion and theology,” alongside scholarship in the history of technology, communication studies, gender, and new religious movements, have long engaged these themes. Although the prevailing public narrative casts AI as a purely rational endeavor, implying a decisive break from faith, many essays in this volume, notably those by William Clocksin and Beth Singler, remind us that early computing was fueled by profound religious conviction. That Protestant bias—privileging mental over physical capacities—continues to shape the technologies built in the US, steering the field toward the enhancement of mental faculties above all else.

In Japan, the story of computational technology unfolded quite differently. Hannah Gould and Keiko Nishimura turn their attention to Pepper, the humanoid robot that conducts Buddhist funerary rites, to reveal an AI lineage woven into religious practices in a society shaped by Buddhism. The robot is understood as a physical extension of the human body rather than mind, and performs rites that a dwindling population of hereditary ritual specialists would otherwise have conducted. Two factors set the Japanese Buddhist narrative of AI apart. First, in post-World War II Japan, a pacifist constitution steered robotics toward social welfare, prioritizing machines with tangible, embodied form rather than disembodied “minds.” Second, this strand of Japanese Buddhist thought rejects the notion of a fixed self and views personhood as emerging out of the interdependence of embodied experience and conscious awareness. Consequently, AI in Japan has developed not as a quest for supreme intelligence, but to enrich everyday life. This shift invites fresh questions at the intersections of AI and religion: how might we program a machine to think, and how can we create an artificial being that truly experiences the world?

If the example of a robot priest shows how a machine can take on ceremonial duties, the chapter on Hinduism goes a step further by proposing that a deity might reincarnate as an AI. Robert M. Geraci and Stephen Kaplan highlight the fluidity and ritual richness of lived Hinduism, noting that devotees often attribute consciousness to non-human entities. Drawing on Nepalese priestly tradition, they describe a belief that, in the age of Kali Yuga, humans falter in ritual performance — but robots might not. Citing Holly Walters, they demonstrate how some Nepali Hindus envision world renewal through AI, expecting ritual perfection and cosmic redemption embodied in Kalki, the prophesied tenth and final reincarnation of the deity Vishnu, as an artificial being. In this framework, AI appears, from its very conception, not merely as a tool in religious practice but as a materially sentient presence of a Hindu deity.

Philip Butler’s chapter on Black theology and AI emerges as the most ethically charged inquiry in the volume. Butler argues that AI is far from a neutral technology. He begins by framing AI as an extension of state policing, an apparatus that, through insufficient regulation and its entrenched algorithms, datasets, and approved mechanisms, digitizes and upholds white supremacy. Butler’s argument demands we confront a pressing truth: if AI claims a form of consciousness, it must first answer for its own complicity in racial oppression.

To deepen his critique, Butler traces AI’s origins in the United States, where race has shaped decision-making at every level: in the design choices of engineers, the data curated for machine learning, and the policy frameworks (or lack thereof) that govern its deployment. Echoing Singler’s observations, Butler argues that this racialized foundation has allowed the reach and ethos of scientific racism to permeate contemporary AI systems. Further, Butler illustrates how so-called neural systems automate structural racism and embed anti-Black bias. This “New Jim Code,” to use Ruha Benjamin’s phrase, quietly inscribes social hierarchies into digital infrastructure. We must therefore foreground AI’s origins and its intended uses.

Responses within Islamic and Jewish traditions express reservations about the expanding presence of AI in everyday life. Scholars Yaqub Chaudhary and David Zvi Kalman worry that AI’s mechanistic logic may displace the role of faith and a sense of community in shaping human experience.

Consider the argument of the Islamic thinker Chaudhary, who asserts that a theocentric framework, one highlighting God’s absolute unity and uniqueness in essence, attributes, and action, places reason and rationality squarely in the service of divine recognition. In Chaudhary’s view, rational inquiry is not an end in itself but a disciplined pathway toward apprehending God’s attributes; it functions within, and never outside, the sacred order that binds all reality together.

By contrast, contemporary AI discourse often construes the world as a series of opaque “black boxes,” interpreting both external reality and inner consciousness through the lens of an AI agent’s cognitive processes. This framing suggests that reason operates independently of any transcendent source of unity. In emphasizing the autonomous capacity of artificial minds, such narratives risk inverting the sacred order of reality and undermining the principle of unity that lies at the heart of Islamic theology.

David Zvi Kalman’s chapter titled “Artificial Intelligence and Jewish Thought” opens by tracing medieval and early-modern debates, from the workshop of golem-makers to scholarly reflection on extraterrestrial life, to show how Jewish philosophers have long confronted the possibility of beings beyond our world. By invoking these historical case studies, Kalman illuminates how questions of moral agency extend beyond human actors to the realm of created entities. In doing so, he uncovers the deep anxieties that our AI systems provoke, such as fears about what makes us uniquely human and concerns that our technological creations may transcend the familiar bounds of our creative endeavors.

Building on this foundation, Kalman aligns with Chaudhary in warning that it remains premature to forecast how traditions rooted in sacred texts will adapt to AI’s rapid expansion. He draws our attention to the multipartite, probabilistic responsibilities these systems impose on society, responsibilities that defy simple ethical prescriptions. Finally, Kalman points out that equating the creation of AI with human creativity revives a profound theological dilemma: is our inventive power a creative expression of divine action, or does it represent a bold challenge to the sovereignty of the divine? This enduring tension, he suggests, lies at the heart of any future discourse on AI within faith communities.

Several of the most compelling counter-discourses to dominant AI narratives in the United States have arisen within American Protestant Christian contexts. Take, for example, transhumanism: a movement that advocates for the enhancement of human life through technological means. Ilia Delio explains how its explicitly Christian underpinnings are significant, for the term itself evokes a doctrine of perfectibility rooted in the belief that human beings are created in the image of God. Transhumanist thought prizes the overcoming of biological limitations, simultaneously secularizing traditional religious motifs and investing technology with a salvific purpose.

Proponents predict that ever-greater cognitive enhancements will give rise to a new kind of person, one culturally embedded in profound relationships and characterized by hyper-personalization and deep relationality. In this vision, the liberal human subject as we know it will yield to an evolved being whose perfectibility transcends our present boundaries. Thus, rather than heralding the end of the human person, transhumanist discourse proclaims the “end” of the human era as it ushers in the next stage of human evolution.

Finally, in her chapter, “The Anthropology and Sociology of Religion and AI,” Beth Singler highlights how AI, despite its seemingly secular framing, inevitably intersects with both individual faiths and the broader concept of religion. She shows how our superintelligence metaphors borrow from classic expressions of religiosity, and that AI, as an encoded social system, merely magnifies existing relations, biases, and interests. In short, religion and AI remain intertwined in both discourse and practice.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post The Ethics of Everything—Courtesy of AI, Across Faiths and Frontiers appeared first on AI and Faith.

Theological Frontiers for AI Creativity

6 June 2025 at 18:32

Recently, as part of a panel discussion on AI creativity, I described what I see as theological frontiers of AI creativity [1].

From many perspectives, generative AI is intrinsically creative. It generates novel constructs from wide-ranging sources, focusing that generation using historical and social norms as additional instructions. Generative use of foundation models, such as large language models (LLMs), draws upon a latent space of information defined by the model’s parameters and architecture to create novel expressions of language and imagery. The challenge of hallucinations/confabulations and early problems with six-fingered people suggest GenAI is, if anything, perhaps a bit “too creative” or novel for reliable, trustworthy engagement.

Looking to philosophy and psychology for definition, creativity depends upon something being novel, valuable, and perhaps surprising [2], criteria that “products” of GenAI regularly meet. Margaret Boden highlights the importance of the creative process and distinguishes between combinatorial, exploratory, and transformational creativity. AI performs well with combinatorial creativity (combining old ideas in new ways) and exploratory processes (exploring conceptual spaces), though transformational creativity remains aspirational: it requires transforming the conceptual space itself to create something that would previously have been considered impossible. (A level of creativity essentially required for the speculated AI singularity.) Philosophers also identify a personal aspect of creativity requiring agency, intention, and self-awareness, areas where current AI systems show partial capabilities with greater capacities on the horizon.

In experiments with ChatGPT, Claude, and Gemini, I asked them to generate prompts that would demonstrate the ability of a GenAI chatbot to be creative. In their generated prompts and then responses to those prompts, they described an alternative history without electricity, a futuristic menu in 2150 where humans have evolved the ability to taste emotions, and a world where emotions physically manifest as small, intricate, clockwork creatures. These demonstrate combinatorial creativity and conceptual exploration. (So, I gave them a prompt to write a prompt; they created a prompt; then I fed the prompt back to them and evaluated the output.) Although the initial goal of demonstrating creativity came from me, I would say the creative intention in each response to their AI-generated prompt came from the LLM. Current agentic AI developments could also provide an LLM with creative goals autonomously, and a self-reckoning process—such as what I have previously described in the context of moral AI—could maintain it [3]. So, current LLMs meet some of the philosophical criteria of a creative person, with other criteria plausible.
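For readers who want to try this two-step “prompt-to-prompt” protocol themselves, here is a minimal sketch. The OpenAI Python client and the model name "gpt-4o" are illustrative assumptions, not the exact setup used in the experiments above, which could equally be run through the chat interfaces of ChatGPT, Claude, or Gemini.

```python
# A minimal sketch of the two-step creativity probe described above.
# Assumptions: the OpenAI Python client (v1) and the "gpt-4o" model name
# are illustrative choices, not the exact setup used in the experiments.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o") -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: ask the model to author a prompt that would demonstrate creativity.
meta_prompt = (
    "Write a single prompt that, if given to a generative AI chatbot, "
    "would demonstrate the chatbot's ability to be creative."
)
generated_prompt = ask(meta_prompt)

# Step 2: feed the model-authored prompt back to the model, then evaluate the
# output for combinatorial creativity (old ideas combined in new ways) and
# conceptual exploration.
creative_response = ask(generated_prompt)
print(generated_prompt, creative_response, sep="\n---\n")
```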

Beyond the product, process, and person aspects of creativity, Mel Rhodes suggests “press,” the context and environment influencing a person’s creative thinking and output [4]. Society determines whether works are creative, and the creating person is influenced by those previously accepted works [5]. Although one’s perception of creativity may vary depending upon whether one knows (or believes) that something was created by AI, computational creativity aims to replicate behaviors that unbiased observers would deem creative [6]. We are now determining what society counts as creative with AI, and theological insights can be brought to bear.

Generative AI meets many of the criteria for creativity, generating novel and valuable products and regularly undertaking combinatorial and exploratory creative processes, but falls short in three areas. First, it lacks transformational creativity (as do most humans). Second, it might not meet theological criteria for personhood despite meeting philosophical ones. Third, social pressures might preclude recognizing AI creativity regardless of other criteria. Theological interpretations of imago Dei become crucial here.

Perspectives on imago Dei can structure examination of AI creativity based upon our understanding of a creative person. Substantive perspectives suggest human rational and creative capacities reflect the divine intellect. With respect to God as Creator, human creativity is not merely a learned skill but an endowment that distinguishes humanity, with our drive and ability to create, from other creatures. Functional interpretations position humans as God’s representatives, with creative acts as instruments of governance and stewardship. This view endows creativity with inherent ethical and political dimensions and requires discernment in what should be created. Relational perspectives view creative acts as expressions of a person’s relationship with the divine and others. These acts should foster deeper connection and communion, including designing and deploying creative products to benefit others, and may demand collaborative creative processes. Philip Hefner’s theological anthropology suggests humans are “created co-creators,” actively participating in ongoing creation [7]. This implies relationships not only between humans and God but also among humans and broader creation.

Drawing on these interpretations, several theological frontiers emerge: The substantive capacity for humans to receive divine inspiration represents transformative creativity that may be unavailable to AI. The capacity for AI to receive revelatory knowledge appears related to its capacity to receive grace, but one could also examine AI’s ability in helping humans interpret any transformative revelation [8]. Functionally, our governance responsibilities would include cultivating responsible AI and potentially using AI to help govern our AI creations as well as those that they in turn create [9]. Relationally, collaborative AI development should engage diverse stakeholders in participatory design processes and incorporate values that foster communities and deeper interpersonal connection into AI development.

While AI can enhance human creativity, theological requirements on personhood may exclude AI from creativity that is dependent on revelation, independent stewardship, or direct relationship with God. However, Christian tradition teaches we act within community. Should we open this community of created co-creators to include AI, perhaps as co-co-creators? Is that question within our purview? Should we build AI systems only as subordinate tools or also as truly cooperative agents that extend our creative reach [10]? If so, under what theological commitments and ethical safeguards? We have the capacity to bring AI into our theological interpretation, stewardly governance, and relationships. If done ethically and to further God’s purposes, then perhaps we should.

I see benefits to using AI for theological scholarship, at least [11]. We likely cannot properly steward a society with advanced AI without creating AI to help us govern ethically. As relational creatures, creating AI that fosters flourishing communities and deeper relationships seems within our mandate. There are limits to current AI creativity, and possibly to future AI, but extending that frontier within a theological framework appears within our role as created co-creators.

In summary, generative AI challenges and expands our understanding of creativity. It meets many criteria for creativity defined in psychology and philosophy: it generates novel and valuable products, engages in combinatorial and exploratory processes, and reflects a responsive press shaped by human culture. Yet theological understandings may demand more. From a substantive perspective of imago Dei, GenAI may mimic human intellect but may lack what characterizes the divine image. Functionally, AI could become a tool or even a partner in stewarding creation—if we govern its development wisely. Relationally, the important question is whether our use of AI fosters deeper communion with God, each other, and creation. Whether AI can become a “co-co-creator” remains undetermined, but we remain responsible for what we create and how we use it. Theology must guide us in evaluating and shaping AI’s creativity for flourishing, justice, and communion.


References

  1. The present article is a revision and expansion of a short panel presentation at the Global Network for Digital Theology Conference on (Co-)Creator, Creativity and the Created, Panel on The Limits of Technological Creativity, June 4, 2025.
  2. Margaret A. A. Boden, The Creative Mind: Myths and Mechanisms, 2nd edition (London: Routledge, 2003).
  3. Mark Graves, “Theological Foundations for Moral Artificial Intelligence,” Journal of Moral Theology 11, no. Special Issue 1 (March 2022): 182–211, https://doi.org/10.55476/001c.34130.
  4. Mel Rhodes, “An Analysis of Creativity,” The Phi Delta Kappan 42, no. 7 (1961): 305–10, https://www.jstor.org/stable/20342603.
  5. Giorgio Franceschelli and Mirco Musolesi, “On the Creativity of Large Language Models,” AI & SOCIETY, November 28, 2024, https://doi.org/10.1007/s00146-024-02127-3.
  6. Simon Colton and Geraint A. Wiggins, “Computational Creativity: The Final Frontier?,” in Proceedings of the 20th European Conference on Artificial Intelligence, ECAI’12 (NLD: IOS Press, 2012), 21–26.
  7. Philip J Hefner, The Human Factor: Evolution, Culture, and Religion (Minneapolis: Fortress Press, 1993); Philip Hefner et al., Human Becoming in an Age of Science, Technology, and Faith, ed. Jason P. Roberts and Mladen Turk (Lanham: Fortress Academic, 2022).
  8. Mark Graves, “Gracing of Sociotechnical Virtues,” Theology and Science 23, no. 3 (2025), https://doi.org/10.1080/14746700.2025.2514303; Mark Graves, “Habits of Theological Reason in Spiritual Formation,” Spiritus: A Journal of Christian Spirituality 18, no. 1 (2018): 35–61, https://doi.org/10.1353/scs.2018.0003.
  9. Mark Graves, “Apprehending AI Moral Purpose in Practical Wisdom,” AI & Society 39 (2024): 1335–48, https://doi.org/10.1007/s00146-022-01597-7.
  10. Braden Molhoek, “The Scope of Human Creative Action: Created Co-Creators, Imago Dei and Artificial General Intelligence,” HTS Teologiese Studies / Theological Studies 78, no. 2 (2022): 7, https://doi.org/10.4102/hts.v78i2.7697.
  11. Mark Graves, “ChatGPT’s Significance for Theology,” Theology and Science 21, no. 2 (2023): 201–4, https://doi.org/10.1080/14746700.2023.2188366.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post Theological Frontiers for AI Creativity appeared first on AI and Faith.

From Mindfulness to Machines: Can AI Learn Compassion? #36

5 June 2025 at 18:13

What happens when we outsource our memory, attention, and even compassion to machines? Join us for a conversation with Dr. Jane Compson, associate professor of comparative religion and ethics at the University of Washington-Tacoma, as we explore how Buddhist wisdom relates to thorny questions around AI.

Dr. Compson shares her unexpected journey into Buddhism and AI ethics and how Buddhist concepts like impermanence and not-self offer frameworks for navigating AI development. The conversation includes a discussion of the practical challenges educators face in the AI age and the ethics of care bots in healthcare, as well as how you don’t need to be a technical expert on AI to meaningfully engage with this critical issue.

Meet our Speakers

Alex Sarkissian is a Buddhist chaplain, contemplative coach, mindfulness teacher, and social entrepreneur. His work centers on cultivating greater awareness, human connection, and compassion in the age of AI. He holds a Master of Divinity in Buddhism from Columbia University’s Union Theological Seminary, where his thesis assessed the potential impact of AI on the cultivation of virtue and human flourishing. Previously, Alex was a founder, CMO, and operator at various early-stage startups and a strategy & innovation consultant at Deloitte. A practicing Buddhist straddling the Insight and Soto Zen traditions, he is passionate about contemplative practice as a pathway toward individual and collective transformation.

Dr. Jane Compson is an associate professor at the University of Washington, Tacoma Campus, teaching classes in topics surrounding Ethics and Religion. Jane holds Ph.D. and Master’s degrees in Comparative Religion from the University of Bristol (UK) and a Master of Arts degree in philosophy (bioethics) from Colorado State University. Dr. Compson has also received lay ordination as a Buddhist chaplain in the Zen Peacemaker Order from Roshi Joan Halifax.

 

Views and opinions expressed by podcast guests are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

Production: Pablo Salmones and Penny Yuen
Host: Alex Sarkissian
Guest: Jane Compson
Editing: Isabelle Braconnot

Music from #Uppbeat. License code: 1ZHLF7FMCNHU39

 

Listen to more episodes here.

The post From Mindfulness to Machines: Can AI Learn Compassion? #36 appeared first on AI and Faith.


AI and Grief: Wisdom at the Intersection of Technology and the Human Heart #35

29 May 2025 at 20:24

In this conversation with Nikki Mirghafori, we explore how artificial intelligence intersects with human emotional pain. Can technology help us understand and alleviate our inner struggles, or might it exacerbate them? With her unique background in AI and contemplative practices, Nikki offers insights into the role of wisdom, compassion, and awareness in an increasingly digital world. Join us for a dialogue that bridges mind and machine, pain and presence, and that challenges our Western thinking by bringing a Buddhist perspective to the discussion.

Meet our Speakers

Pablo A. Ruz Salmones is the Chief Executive Officer at Xeleva Group, a software development, artificial intelligence (AI), and business consulting firm. Pablo has spoken on AI and Ethics in Mexico, the US, and other countries, and appears regularly on TV and radio shows in México to discuss the relationship between art, religion, sustainable development, and technology. Pablo studied Business Engineering and Computer Engineering at Instituto Tecnológico Autónomo de México in Mexico City.

Dr. Nikki Mirghafori serves as a board director and stewarding teacher at Spirit Rock Meditation Center and a Dharma teacher at the Insight Meditation Center in Redwood City, CA. Her PhD thesis was titled “A Multi-Band Approach to Automatic Speech Recognition,” and her master’s thesis was titled “On Robustness to Fast Speech in Automatic Speech Recognition.” Mirghafori holds a B.S. in Computer Science from the University of Illinois Urbana-Champaign, and an M.S. and PhD in Computer Science from the University of California, Berkeley.

Views and opinions expressed by podcast guests are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

Production: Pablo Salmones and Penny Yuen
Host: Pablo Salmones
Guest: Nikki Mirghafori
Editing: Isabelle Braconnot
Music from #Uppbeat. License code: 1ZHLF7FMCNHU39

Listen to more episodes here.

The post AI and Grief: Wisdom at the Intersection of Technology and the Human Heart #35 appeared first on AI and Faith.


Review and Response for Religion and Artificial Intelligence by Dr. Beth Singler

The Inescapable Entanglement of AI and Religion

Dr. Beth Singler’s survey of AI and religion provides a broad discussion of the intersection of the two topics. At the same time, the case studies in the piece allow us to dig into specific illustrations of how AI and religion influence one another. A recurrent theme of the book is the entanglement of AI and religious practice and how the two can form a feedback loop. A related motif is how certain elements of the AI conversation, intended to be wholly secular, show implicit religiosity.

The first half of the book focuses on the paradigm of rejection-adoption-adaptation, exploring how religious groups have interacted with AI in each of those respective ways. Singler mainly couches religious adaptation to AI as part of a larger enmeshment with technology, partially accelerated by the Covid-19 pandemic, during which many organizations leveraged social and digital media to continue some semblance of normal operations. Likewise, she points to interactive prayer apps and sermons created with generative AI as salient examples of how religion has adopted AI. Perhaps a more striking example from Singler is the recent incorporation of animatronic temple elephants into Hindu rituals, replacing acts previously performed by live elephants. The section on rejection is particularly compelling, with Singler analyzing how renunciation runs in both directions. On one hand, certain religious voices strongly reject AI and associate it with larger secular technological trends that lead away from God and towards the end of humanity. On the other hand, influential anti-theists in the AI space dismiss religion as illogical and even as a human-created problem to be eliminated. In spite of this perspective, Singler points out that explicit rejection of religion does not mean its influence is missing.

The latter half of Singler’s work centers on the entanglement among AI, religion, transhumanism, posthumanism, and new religious movements (NRMs). An important case study from the section concerns Roko’s Basilisk, a thought experiment positing that a future AI overlord will not be pleased with anyone who, having become aware of its inevitable future existence, did not aid in its creation. Though intended as a purely secular thought experiment, it exhibits religious undertones. For instance, one commenter on the site LessWrong noted that they had learned the same concept in Sunday school: God would punish anyone who heard His Word yet disbelieved. This point leads to a deeper theme surrounding the future envisioned by the transhumanist movement, a future that would be heavily enabled by AI. Namely, it exhibits implicit religious themes. The movement’s goal is to vastly improve the human condition and extend life indefinitely, aims traditionally found in the realm of religion. However, implicit religiosity is not the whole story, as some discussion actively explores AI creating religion or being worthy of dedicated praise. Perhaps most notably, a movement called Theta Noir has a stated goal of celebrating a future “superorganism born of code, destined to recreate us.” Singler also expressly addresses posthumanism, the idea that humans will and even should be replaced, which inevitably collides with religion. Does this ideal place us as a god, creating an improved successor? Conversely, does it mean we are a regrettable blip in the evolution of the universe? Either way, the dialogue presents questions that have been historically religious or philosophical in nature.

The message of AI and religious entanglement, whether purposeful or unintentional, repeats throughout the work. This strikes me as an honest reflection of the realities of the joint topic. In addition to providing a full overview of this intersectional topic, the book serves as a springboard for discussion. In that spirit, I will respond to a few discussion questions she poses in the work. My background is in Christian faith, so that is the angle from which I will approach the discussion.

Is ‘sin’ a useful category in discussions of science and technology?

I found this question to be particularly deep, with no way to tie up an answer in a nice bow. From a Christian perspective, sin is doing wrong by God. A more secular definition is that sin is doing wrong to our fellow humans (which is also the result of turning against God). Scientific and technological discoveries have always been controversial, sometimes even labeled as sinful, though myopically in many cases (e.g., Galileo’s persecution for promoting heliocentrism). Science is the fundamental discovery of how the universe works, which is not in conflict with following the Creator of the Universe. Discovery and explanation are not the problem; rather, the problem is selfish applications, the result of us being flawed humans living in a Fallen world. As Christians, we should see certain misapplications of technology for what they are: sinful (e.g., addictive social media algorithms, deepfake explicit content, illegal surveillance). From a secular perspective, the concept of using science and technology as tools against fellow humans should still resonate. However, history tells us that we should be thoughtful and distinguish knowledge itself from how that knowledge is applied. Science and technology help us understand the “how” of the universe, while religion and philosophy can help us explicate the “why,” along with the moral implications of how knowledge is practically used.

Is death a pernicious problem to be fixed or is that transhumanist goal a mistaken response to it?

As a Christian, I believe transhumanists have a correct fundamental belief: this world is not how it should be. Death creates devastation and makes us feel empty, but we are not the rulers of death. Humans can play a role in fixing this world’s problems, but we are not God; we are limited. I do not necessarily believe the transhumanist goal of fixing death is mistaken; rather, I do not think humans can accomplish these intentions in a lasting, meaningful way. “Living forever” in the metaverse is certainly not the same as an embodied immortality. What would happen if natural disasters destroy all computer infrastructure? How would a “digital upload” maintain the essence of what it is like to be me? A digital afterlife is a solution, but one that is hollow. Only God has the power to create everlasting fulfillment. Paradoxically, that fulfillment comes through sacrificial death in Christianity. There is no life without death, no free will without consequence (John 12:24). There is always an equal exchange because evil must be overcome by good; it does not retreat on its own (1 John 4:10).

Is Yuval Harari right in worrying about the ability of AI to manipulate people through religion?

Yuval Harari, an Israeli thinker, has raised this concern, which I believe is valid. If a future system that is labeled as artificial general intelligence (AGI) provides religious commentary, some people would certainly take it at face value. Today’s leading AI models can be easily manipulated, which means a theoretical AGI may not be impervious to the same influences. Even with the best of guardrails, we cannot state that an AGI would not make influential religious statements. However, my main concern is not with a hypothetical future but with what is presently happening with AI and religion, which does not deal with the output of large language models. In many ways, AI is being treated as a religion today. Many tech leaders promise a glorious future; we simply need to have faith and believe in what they say. Their followers preach the good news repeatedly and strive to increase the flock. How much influence is being wielded simply to perpetuate the hype cycle? This strikes me as a fair question to ask.

The religious background of AI developers and scientists is sometimes unremarked upon; while other times – as in the case of Lemoine – it can become a source of cynicism. Is it helpful to understand the cultural context within which scientists work or is science a neutral project, if such a thing is possible?

The question refers to Blake Lemoine, the Google employee who made headlines in 2022 for saying the LaMDA language model was sentient. His training as a priest further complicated how people viewed his ability to make neutral assertions about language models. To address the topic broadly, a distinction between hard and social sciences is important. In physics, for instance, we can achieve comparatively more neutrality because data can often be collected in an unbiased way. In social science, data collection is influenced by existing and dynamic social conditions. Therefore, interpreting such data is, by definition, bound to be influenced by culture and personal preferences. We can certainly be aware of and work to combat such biases, but they are embedded in any social science, including the evaluation of technology. Understanding the cultural context of social science work is, consequently, instructive. Specifically in the case of sentience of language models, the neuroscience community has no consensus definition of consciousness and sentience. This cultural context is vital to assess when someone makes a claim about AI and consciousness: that person is making an assertion that will not have consensus scientific backing and, therefore, will likely be influenced by other factors. (Neuroscience may be considered a “hard” science, but it certainly is not parallel to physics.)

In conclusion, Dr. Singler’s work provides a great read for anyone interested in AI, religion, or the entwinement of the two. It also provides an excellent framework for further study and questions. From an AI perspective, we can better anticipate the consequences of tech-inspired movements by detecting both their implicit and express religiosity. From a religious perspective, the AI movement is creating fundamental shifts, both in how religious groups operate and in the messages they need to share with the world.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post Review and Response for Religion and Artificial Intelligence by Dr. Beth Singler appeared first on AI and Faith.

Whispering Hope: Ethical Challenges and the Promise of AI in Mental Health Therapy

16 May 2025 at 20:51

Introduction: A Personal Story of Despair and Hope

The backyard was quiet, save for the faint rustling of leaves. I stood there, a 9-year-old boy, staring intently at the plastic containers under the sink. Each container, innocuous to the unknowing eye, held potent chemicals meant for cleaning the house. But to me, they offered something else entirely. The prior week’s events played on an endless loop in my mind. My father, rarely present at home, sometimes stumbled back drunk, contributing nothing but additional chaos. My mother, a tempest of physical and emotional abuse, left marks not just on my skin but on my very soul. Friends were a distant dream, an almost foreign concept to a child who had never known the warmth of companionship.

Alone in that backyard, the weight of my existence pressed down on me. The thoughts that swirled in my young mind were dark and relentless. Nobody seemed to care about me. I was a burden, a load too heavy for anyone to bear. Their lives would undoubtedly be easier if I were not there. I started crying, silent tears that spoke volumes of my pain and desperation. My gaze shifted back to the containers. Drinking their contents seemed like a solution to end the ceaseless torment.

I reached for one container, preparing to unscrew the cap, when a still, quiet voice filled my heart. It was not a shout, not a command, but a gentle whisper reminding me: “You are loved. You are valuable. You are not alone.” The words resonated deep within me, cutting through the fog of despair. Something shifted inside me, a flicker of hope igniting in the darkness. I decided that I would not drink the chemicals. I would not succumb to the same fate as my father. I would not let the abuse define me. Instead, I made a solemn vow to myself: one day, I would do great things and show my parents, and the world, that I was different from them.

My story is a testament to the power of hope and intervention. But what if, in that moment, there had been no whisper? What if there had been no intervention? For many, the answer lies in the promise of technology, specifically artificial intelligence (AI), to provide support when human connection is absent [12]. However, the road to integrating AI as a therapeutic tool is fraught with ethical challenges [13] that must be addressed to unlock its full potential.

The Promise of AI in Mental Healthcare

Mental healthcare encompasses the diagnosis and treatment of mental disorders, including anything from mild anxiety disorder to bipolar disorder, from mild depression to schizophrenia [14]. In 2019, 970 million people globally were living with a mental disorder [15]. Access to care to address mental health disorders remains a significant barrier, causing a great social and economic burden. Researchers from the World Economic Forum estimated that mental health conditions cost the world economy approximately US$ 2.5 trillion in 2010, including lost economic productivity (US$ 1.7 trillion) and direct care costs (US$ 0.8 trillion). This total cost was projected to rise to US$ 6 trillion by 2030 [16]. AI has emerged as a potential game-changer, offering solutions like virtual therapists and chatbots, software applications for creating personalized treatment plans, therapist assistants, continuous monitoring, and outcome assessment tools [17]. For instance, Therabot, a generative AI therapy chatbot, has already shown promise in treating patients with major depressive disorder, generalized anxiety disorder, and those who are clinically at high risk for feeding and eating disorders [18]. But as we embrace this technological revolution, we must confront the ethical dilemmas that accompany it [19][20][21][22][23][24]. Here are five critical challenges that demand our attention:

1. Uncertainty and Distrust of AI Responses

AI systems analyze data and make predictions using complex algorithms. While often highly accurate, they are not infallible. In mental healthcare, even a small margin of error can have serious consequences. For example, an AI system might misinterpret a user’s language and fail to recognize suicidal ideation, eroding trust among both patients and clinicians.

Mental health disorders are shaped by subjective symptoms, environmental influences, and a patient’s unique experiences. AI algorithms often struggle to interpret these nuanced factors, potentially overlooking key insights a human clinician would detect. Because mental health conditions manifest differently across individuals, a one-size-fits-all AI solution is impractical. Algorithms must adapt to diverse presentations while minimizing false positives (misdiagnosing a disorder) and false negatives (failing to detect one). Overlapping symptoms further complicate diagnosis, even for experienced clinicians, which poses additional challenges for AI-driven assessments. As research advances, AI models must be continuously updated to reflect new diagnostic criteria and emerging insights [25].
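To make the false-positive/false-negative trade-off concrete, here is a minimal sketch of how such error rates are computed, assuming scikit-learn and entirely fabricated labels; it is illustrative only, not a clinical evaluation.

```python
# A minimal sketch of measuring a screening model's error trade-off.
# The labels below are fabricated for illustration (1 = disorder present).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # hypothetical ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]  # hypothetical model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# A false negative (a missed disorder) and a false positive (a misdiagnosis)
# carry very different clinical costs, so both rates should be reported.
fnr = fn / (fn + tp)  # missed cases among those who truly have the disorder
fpr = fp / (fp + tn)  # misdiagnoses among those who do not
print(f"false negative rate = {fnr:.2f}, false positive rate = {fpr:.2f}")
```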

Integrating AI into therapy requires preserving the human connection. AI should support—not replace—the therapeutic relationship between patients and therapists. Maintaining a balance between AI-driven interventions and human care is essential. Patients must be informed when AI tools are involved in their treatment. This level of transparency allows patients to make informed decisions about their care. AI-driven monitoring should always include human oversight. While AI can track behavioral changes, therapists must interpret and act on these insights to ensure that compassionate, human-centered care remains the foundation of mental health support [26].

To mitigate distrust in AI responses, transparency is crucial. Developers must prioritize explainability, ensuring that users understand how AI-generated decisions are made. Additionally, AI algorithms for mental health diagnosis should undergo rigorous validation and testing, comparable to other diagnostic tools. Clinical trials are necessary to establish effectiveness and safety [27].

2. Regulation of Mental Health Apps

The rapid advancement of AI in healthcare has outpaced regulatory frameworks, leaving no universal standard for evaluating the safety and efficacy of AI-driven mental health tools. This lack of oversight raises concerns about potential misuse.

In the U.S., the FDA has taken on regulating mental health apps, which are generally classified as low risk for adverse events. As a result, they fall under “enforcement discretion,” meaning the FDA does not require review or certification. However, legislation mandates FDA approval for digital therapeutics, making regulatory oversight inevitable if mental health apps require clearance for reimbursement [28].

To assess the risks and benefits of these apps, national post-market surveillance programs—similar to the FDA Adverse Event Reporting System (FAERS)—should be established. FAERS allows healthcare professionals, patients, and manufacturers to report adverse events, medication errors, and product quality concerns [29].

Post-market surveillance would enhance the safety and effectiveness of mental health apps by [30]:

  • Identifying apps whose clinical trial results do not translate to real-world effectiveness.
  • Detecting rare adverse effects not evident in trials.
  • Highlighting implementation failures, such as misaligned coaching protocols.
  • Monitoring the impact of feature and interface changes.
  • Distinguishing app performance issues from broader care system failures.
  • Informing healthcare systems and payers on which apps to adopt or discontinue.

Building a robust oversight system presents challenges. With an estimated 20,000 mental health-related apps on the market and rapid software evolution, usability and risk data can quickly become outdated [31].

Governments and international organizations must collaborate to establish clear guidelines for bringing AI-based mental health apps to market, ensuring regular audits, data protection, and accountability.

3. Assigning Responsibility when Using AI in Practice

The use of AI in therapy raises important questions about accountability. Who is responsible if an AI system provides harmful advice or fails to prevent a crisis—the developer, the clinician, or the user?

While developers, producers, and regulators bear responsibility for ethical AI use, accountability extends to those who integrate AI into therapeutic practice. Clinicians who rely on AI must recognize their own role in decision-making and remain mindful of the power they delegate. AI-assisted decisions must be grounded in trustworthy, secure, and transparent algorithms that minimize bias and unintended consequences. Additionally, clinicians must avoid overdependence on AI, ensuring that human judgment remains central in mental healthcare—especially as society grows increasingly reliant on technology [32].

The introduction of AI in psychiatry also prompts reflection on traditional mental health classification. AI may challenge existing diagnostic frameworks, adding to longstanding debates on how mental health disorders are categorized. If AI disrupts established diagnostic standards, it raises fundamental questions: What does it mean to misdiagnose a mental health condition? How should practitioners redefine responsibility to account for AI-generated insights? As AI-driven tools introduce new types of clinically relevant data—such as behavioral patterns from social media or typing speed—practitioners may need to adjust diagnostic criteria accordingly. However, it remains unclear whether redefining these categories falls within clinicians’ professional obligations or how such modifications should be implemented [33].

A shared responsibility model is essential to ensuring that AI preserves human agency in mental healthcare. Developers must uphold high ethical standards, clinicians should treat AI as a supportive tool rather than a replacement for human judgment, and patients must be informed of AI’s limitations.

4. Ensuring Data Privacy and Security

AI models rely on vast amounts of data, and in mental healthcare, this includes highly sensitive information about thoughts, emotions, and behaviors. The risk of data breaches and misuse is a critical concern.

Protecting patient confidentiality requires stringent safeguards to prevent unauthorized access to medical histories, therapy session records, behavioral data, treatment details, and real-time emotional states. For example, AI-driven mental health platforms like Talkspace comply with HIPAA regulations, ensuring secure data storage and transmission [34].
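As a small illustration of one such safeguard, here is a minimal sketch of encrypting a therapy note at rest, assuming Python’s third-party cryptography package; real HIPAA compliance involves far more (key management, access control, audit logging) than this single step.

```python
# A minimal sketch of symmetric encryption at rest with the `cryptography`
# package; illustrative only, not a complete HIPAA-compliant design.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a managed key vault
cipher = Fernet(key)

note = b"Hypothetical session note: patient reported improved mood."
token = cipher.encrypt(note)  # ciphertext is safe to store in a database

# Only holders of the key can recover the plaintext.
assert cipher.decrypt(token) == note
```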

Ethical AI in mental healthcare also involves transparent data ownership policies. Patients must provide informed consent, retain control over their information, and be able to opt out or delete their data at any time. Clear guidelines should empower individuals to understand how AI-driven interventions use their personal information [35].

5. Biased Results

Bias in AI systems can perpetuate disparities in mental health diagnosis, healthcare access, and treatment outcomes. If AI models are primarily trained on data from specific demographic groups, they may fail to represent the diversity of mental health experiences. Such a failure in representation can lead to misdiagnoses, inadequate treatment recommendations, or worsening conditions among underrepresented populations. To promote fairness, it is essential to diversify training data, explore diversity in algorithm design, systematically audit for discriminatory biases, and implement transparency and accountability measures in algorithm development. Additionally, including diverse stakeholders such as mental health professionals and marginalized communities in the design and evaluation of AI tools helps reduce bias and ensures ethical, effective, and equitable interventions [36].
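The audits mentioned above can begin quite simply. Below is a minimal sketch of a subgroup audit, assuming pandas and fabricated records, that compares how often a model misses true cases in each demographic group; a large gap between groups is a signal of discriminatory bias worth investigating.

```python
# A minimal sketch of a subgroup bias audit on fabricated evaluation data.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 0, 1, 0, 1, 1],   # 1 = disorder actually present
    "y_pred": [1, 0, 1, 0, 0, 1, 0, 1],   # hypothetical model predictions
})

# Miss rate per group: false negatives among patients who truly have the
# disorder. A disparity here means one population is being under-served.
for group, sub in df.groupby("group"):
    cases = sub[sub["y_true"] == 1]
    miss_rate = (cases["y_pred"] == 0).mean()
    print(f"group {group}: miss rate among true cases = {miss_rate:.2f}")
```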

Creating culturally responsive AI solutions requires development teams that reflect a range of backgrounds and perspectives representative of the target population. A diverse team helps mitigate biases and ensures that interventions remain relevant across different cultural contexts. Collaboration between AI developers and mental health professionals with cultural expertise is crucial to refining AI tools and accounting for nuanced differences in mental health experiences [37].

AI’s integration into healthcare also risks widening existing disparities. As healthcare becomes increasingly focused on prevention and personalized interventions, AI-driven solutions may inadvertently favor affluent populations with better access to medical resources. This trend threatens to reinforce a “medicine for the rich” model, where advanced tools benefit wealthier individuals while lower-income communities struggle to access basic care. To prevent AI from exacerbating healthcare inequities, equitable frameworks must ensure AI serves the common good. Policies should focus on accessibility, affordability, and fairness, ensuring that all individuals, regardless of socioeconomic status, benefit from AI advancements in mental health support [38].

Conclusion: A Path Forward

AI has the potential to transform mental healthcare by enabling personalized interventions, early symptom detection, stronger patient-provider relationships, and expanded access to quality care. When used ethically, AI can enhance the compassionate presence that healthcare providers extend to those in need.

However, if AI replaces rather than supports human interaction, it risks reducing care to an impersonal, centralized framework, stripping away essential relational connections. Instead of fostering solidarity with the sick and suffering, such misuse could deepen the loneliness that often accompanies illness, especially in a culture where individuals are increasingly devalued. AI-driven mental healthcare must uphold human dignity and ensure meaningful engagement rather than isolation [39].

Despite its promise, AI in mental health presents significant challenges. These include uncertainty in AI responses, the need for a robust regulatory framework, responsibility assignment for mistakes, privacy guarantees, and bias mitigation. Addressing these issues is essential to making AI a trustworthy ally in the fight against mental illness.

Moving forward, collaboration will be key. Researchers, developers, clinicians, policymakers, and patients must work together to ensure AI is implemented ethically and effectively. Only through collective effort can we unlock AI’s full potential and offer hope to those who need it most.


References

  1. M. Zao-Sanders, “How People Are Really Using Gen AI in 2025,” Harvard Business Review, April 9, 2025. Available: https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025.
  2. P. Rajpurkar, E. Chen, O. Banerjee, and E. J. Topol, “AI in health and medicine,” Nature Medicine, vol. 28, no. 1, pp. 31-38, 2022.
  3. F. Minerva and A. Giubilini, “Is AI the Future of Mental Healthcare?,” Topoi, vol. 42, no. 3, pp. 809-817, 2023.
  4. World Health Organization, “World Mental Health Report – Transforming Mental Health for All,” 2022. Available: https://www.who.int/publications/i/item/9789240049338.
  5. World Economic Forum, The Global Economic Burden of Non-communicable Diseases, 2011. Available: https://www3.weforum.org/docs/WEF_Harvard_HE_GlobalEconomicBurdenNonCommunicableDiseases_2011.pdf.
  6. D. B. Olawade, O. Z. Wada, A. Odetayo, A. C. David-Olawade, F. Asaolu, and J. Eberhardt, “Enhancing mental health with Artificial Intelligence: Current trends and future prospects,” Journal of Medicine, Surgery, and Public Health, vol. 3, August 2024. Available: https://www.sciencedirect.com/science/article/pii/S2949916X24000525.
  7. M. V. Heinz, D. M. Mackin, B. M. Trudeau, S. Bhattacharya, Y. Wang, H. A. Banta, A. D. Jewett, A. J. Salzhauer, T. Z. Griffin, and N. C. Jacobson, “Randomized Trial of a Generative AI Chatbot for Mental Health Treatment,” NEJM AI, vol. 2, no. 4, March 2025. Available: https://ai.nejm.org/doi/pdf/10.1056/AIoa2400802.
  8. A. Thakkar, A. Gupta, and A. De Sousa, “Artificial intelligence in positive mental health: a narrative review,” Frontiers in Digital Health, March 2024. Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC10982476/.
  9. N. Zagorski, “Digital Mental Health Apps Need More Regulatory Oversight,” Psychiatric News, vol. 58, no. 12, p. 23, 2023.
  10. D. C. Mohr, J. Meyerhoff, and S. M. Schueller, “Postmarket Surveillance for Effective Regulation of Digital Mental Health Treatments,” Psychiatric Services, vol. 74, no. 11, pp. 1113-1214, 2023.
  11. Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education, “Antiqua et Nova,” 2025. Available: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html.

Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post Whispering Hope: Ethical Challenges and the Promise of AI in Mental Health Therapy appeared first on AI and Faith.

The Value of AI and Faith in the Workplace

30 April 2025 at 16:19

My name is Robert Rex. I am an All-American dancer and advertising analyst passionate about the intersection of AI, faith, and business. Over the past three years, I have been inspired by a global movement that’s shaping how we bring our full, authentic selves—including our deepest beliefs—into the workplace.

Discovering Faith@Work

Robert Rex shares his experience with the Dare to Overcome conference, and encourages others interested in the intersection of faith and AI to attend.

 

In 2022, I met Dr. Brian Grim, the founder of the Religious Freedom & Business Foundation, when he visited my university. I was surprised to learn that many Fortune 500 companies, including Google, Intel, and American Airlines, actively promote religious diversity and inclusion at work.

Faith and belief are core to nearly every human being, including those with no specific religious affiliation. When people are free to express those values authentically at work, it energizes teams, enhances innovation, and contributes to better performance and business growth.

AI and Faith

When artificial intelligence is built on ethical foundations informed by faith and human dignity, it becomes more inclusive, trustworthy, and relevant. AI shaped by moral frameworks can better serve the common good and reflect the values that matter most in society.

Volunteering at Dare to Overcome

After meeting Brian Grim, I connected with his colleague Paul Lambert and learned about an opportunity to volunteer at the National Faith@Work Conference in Washington, D.C., known as Dare to Overcome. This event brings together a religiously diverse group of business leaders, employee resource groups (ERGs), and thought leaders from around the world to share best practices for building faith-friendly workplaces. One of the conference highlights is the presentation of the Religious Equity, Diversity, and Inclusion (REDI) Index Awards, which honor companies that lead the way in welcoming faith into the workplace.

Friendly Competition, Shared Values

During my first year of volunteering, American Airlines took the top spot on the REDI Index. Intel came in second and, in good humor, said, “American Airlines is number one because they have ‘Intel Inside’.” The next year, Intel doubled down on its commitment to inclusion and earned the #1 spot, showing how healthy competition can inspire progress and create positive change.

Meaningful Connections

It was at this conference that I met David Brenner, the founder of AI and Faith, during a powerful panel on the role of faith in shaping AI. I was deeply inspired and later joined the editorial team. AI needs ethical stewards, and this community is one of the most thoughtful spaces I have found at the intersection of emerging technology and belief. This year, Ben Christensen and I from AI & Faith will be attending Dare to Overcome 2025, and we would love to meet others in our community who will be in Washington, D.C.

Join Us — May 20–22, 2025

The 2025 Dare to Overcome: National Faith@Work Conference will be held at Catholic University of America in Washington, D.C., from May 20–22, 2025. Registration is open now!

When we bring our whole souls to work—whether in boardrooms or in building technology—remarkable things happen.

The future is bright. I hope to see you at Dare to Overcome 2025!


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.

The post The Value of AI and Faith in the Workplace appeared first on AI and Faith.
