Of the numerous biblical and theological topics I could write about from the 2025 Evangelical Theological Society’s annual meeting, I find myself pondering the sessions related to the use of AI in theological higher education the most. I am not an expert on AI. Nor am I a tech whiz, as the Vanguard University IT department can attest. I groan with every iPhone software update. I am simply a teacher who wants to become a better teacher, for the sake of my students. At times, I wish I could jump into Marty’s DeLorean and go back in time to “the good old days” before AI existed and continue “business as usual” in the classroom. But, like the old game of hide and seek, we must face our new reality: ready or not, here it comes. I can run from AI, but I can’t hide.
I confess that I have procrastinated writing this blog, in part, because I wanted to sharpen my own thinking on the topic in the hope of making a helpful contribution to the ongoing conversation across campus. However, three months later, I am still wrestling with the same tensions and unanswered questions. So, I decided to write while I wrestle—in the hopes that it may encourage others who find themselves in a similar space. (For the record, and to alleviate any concern, my use of the em-dash in the previous sentence does not indicate AI-generated text.)
As I type this blog post, the AI editor in the free version of Grammarly underlines “I wrestle,” suggesting that I change it to “I am wrestling.” I reject the suggested change. I just used “I am still wrestling” in the previous sentence, and I prefer the way “I wrestle” sounds. (English colleagues, help me out here.) Now it wants me to change “wrestling” to “grappling,” not understanding that I repeated the word intentionally for rhetorical effect. I can’t even write a blog post about AI invading my writing without AI invading my writing. As I grapple with my new intrusive companion, I also wonder, “At what point is AI rewriting my sentences an incremental—even if initially infinitesimal—loss of my own voice?”
Sessions at academic conferences rarely draw crowds larger than 30 or 40 people, but when I arrived at the session entitled “The Perils and Possibilities of AI for Academics” at the Evangelical Theological Society, I found a room bursting beyond capacity, crowded with eminent scholars in suits and ties sitting criss-cross-applesauce on the floor. Others stood outside with me in the hallway, straining to hear and watching for open spots like vultures.
Whispered conversations buzzed around me with all of the familiar questions. Do we return to all blue books and handwritten in-class essays? Oral exams only? Do we dive in with both feet and design our assignments around learning how to generate the best AI prompts in our academic fields, deliberately writing different angles of bias into each prompt, and putting different AI engines in conversation and competition with one another?
Panel presentations and the ensuing conversations highlighted how wide the divide remains in perspectives on the use of AI—particularly in the context of biblical and theological scholarship and higher education. On one end of the spectrum, a presenter recommended integrating AI bots into the LMS to grade assignments and answer student questions. This dean from a prominent evangelical seminary silenced the room with his provocative statement: “What we consider plagiarism today will not be considered plagiarism five years from now.”
Another presenter recommended using AI as an editor for scholarly publications, asking, “How is this substantively different than hiring a copy editor?” He made a distinction between using AI for iterative purposes (like an assistant) and using AI for generative research (like an authority), recalling the time Claude hallucinated a quote from St. Augustine’s The City of God as “Exhibit A” for never treating AI as an authority.
Wondering whether such a distinction between iterative and generative AI is somewhat—well—artificial, I asked the panel, “At what point does iterative editing become generative writing that potentially changes the author’s voice and style? What if your sentence sounds smoother and your syntax tighter, but your writing also sounds more generic or less like your distinctive voice?” Robust discussion ensued, with the question remaining essentially unanswered and unanswerable.
On the other end of the spectrum, another presenter cautioned scholars against using AI for editing, raising questions of intellectual property and the inadvertent premature release of unpublished research into the AI-mosphere (did I just invent a word?). The tension among panelists brought me an odd sense of relief: attitudes toward the use of AI in scholarship and teaching evidently vary as widely among the experts on the panel as they do among my colleagues chatting in the hallway at Vanguard. We are all trying to figure out the best ways to stay current with technological advances in our academic disciplines without compromising academic standards, ethical protocols, research integrity, or—most importantly—what it truly means to be human.
While that may sound hyperbolic, I recall a poignant moment in which one presenter encouraged scholars to use AI dialogically and conversationally, engaging with AI as we would with a real human (as opposed to treating it as a substitute for a Google search). In response, another presenter pushed back strongly against this approach. He urged scholars to employ great caution when referring to AI with human pronouns and interacting with AI on a conversational level, highlighting the ethical dangers of perpetuating transhumanist ideas by anthropomorphizing machines.
He encouraged us to ask ourselves more than just whether we should use AI or how we should use AI in the classroom. He urged us to ask, “What does it mean to be human? Are we ‘its’? Is our value defined by efficiency or perfection?” This scholar argued in favor of returning to a substantive view of humanity rather than a functional one. “What does it mean to know things?” he queried, closing with, “There are tools, but there is truly no artificial intelligence.”
From the deeply philosophical to the wonderfully practical—including ways to use NotebookLM to curate AI-searchable closed data sets—the panelists challenged us to think critically and carefully about the use of AI in our teaching and scholarship. As an example of a closed data set approach, Logos Bible Software’s AI-powered “Study Assistant” allows users to create custom closed data sets within their digital libraries, searching only resources the user owns and providing links to sources and citations to verify accuracy. Logos engineers designed the technology with careful constraints so that it does not hallucinate fake sources or quotations. Vanguard is one of more than 200 leading seminaries and universities around the world with an academic partnership with Logos Bible Software, and I was proud to represent the university at the Logos breakfast for partner schools, learning how colleagues at other institutions use Logos for active learning approaches inside the classroom, as well as for both student and faculty research outside the classroom.
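For the technically curious, the heart of the closed data set idea can be captured in a simple sketch. What follows is only a back-of-the-napkin illustration of the principle, not Logos’s actual engineering; every title, passage, and function name in it is hypothetical. The idea: a search that returns nothing but verbatim passages from a library the user already owns, each paired with its citation, and that says so plainly when the library holds nothing relevant.

```python
# A toy illustration of the "closed data set" principle: the assistant may
# only quote passages that already exist in a user-owned library, each one
# returned with its citation, and it must admit when the library contains
# nothing relevant. Because nothing is generated, nothing can be invented.
# (All titles, passages, and function names here are hypothetical.)

from dataclasses import dataclass


@dataclass
class Passage:
    citation: str  # which owned resource the text comes from
    text: str      # the verbatim passage itself


# A stand-in for a user's digital library (entries are illustrative only).
LIBRARY = [
    Passage("Augustine, The City of God 19.13",
            "The peace of all things is the tranquility of order."),
    Passage("Calvin, Institutes 1.1.1",
            "Nearly all the wisdom we possess consists of two parts: "
            "the knowledge of God and of ourselves."),
]


def significant_words(s: str) -> set[str]:
    """Lowercase a string and keep only words long enough to be meaningful."""
    return {w.strip('.,;:"?!').lower() for w in s.split() if len(w) > 3}


def search_closed_set(query: str, library: list[Passage]) -> list[Passage]:
    """Return only passages from the closed set that overlap with the query."""
    terms = significant_words(query)
    return [p for p in library if terms & significant_words(p.text)]


if __name__ == "__main__":
    results = search_closed_set("peace and the tranquility of order", LIBRARY)
    if not results:
        print("No matching passages in your library.")  # an honest refusal
    for p in results:
        print(f'"{p.text}" ({p.citation})')  # every answer carries a citation
```

By construction, a tool like this cannot fabricate a source; the worst it can do is come up empty. Real systems layer language models on top of such a retrieval step, which is precisely where careful constraints like the ones described above become necessary.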
My conclusion at the moment, after much wrestling (and grappling), is that there is no one-size-fits-all approach to using AI in scholarship and teaching. In some ways, this topic continues to overwhelm me, like the menu at the Cheesecake Factory. I think I’ll just take the plain cheesecake and a cup of black coffee, please. Yet I feel grateful for the blessing of wrestling and grappling in community with my Vanguard colleagues, learning from each of you as we move forward—some timidly and some tenaciously—into uncharted territory. (I truly do love the em-dash, and I resent the fact that AI loves it too.) I also take comfort in the thought that AI was no surprise to God, and that we can trust the Holy Spirit to lead us into all truth with wisdom and discernment as we walk this out together.