Common Good magazine interviewed me a couple of months ago on Logos’s use of AI for Bible study and published a revised transcript as “AI within Your Theological Library.”
If you’re interested in better understanding how Logos’s theology of technology works itself out practically as we carefully apply advances in information technology—in the form of large language models and generative AI—to Bible study, I’d encourage you to read it.
Here’s a selection:
We want to help people take thoughtful, cautious steps in using AI primarily in places where it’s safe, which is helping people find information, learn, and then create something from what they’ve learned—rather than creating the outputs for them. We’re focused primarily on information retrieval and ideation rather than content creation.
. . .
Citing the sources for where the ideas in our search synopsis come from—and highlighting the relevant section in the book—is critical to our responsible use of AI, as this allows users to dig in and verify for themselves that these ideas weren’t made up by AI. We’re very explicit about telling users when we’ve used AI to produce results, and we make it clear that the output may not be comprehensive, accurate, or relevant. We encourage our users to use discernment and check the sources for themselves. We see AI as a way to get users pointed in the right direction with access to the most relevant information faster, so they have more time to study—or serve.
It’s worth remembering that human authors are fallible, too. Just because you found it in a book doesn’t mean it’s true. We also need to be responsible and check the sources behind human-generated content. Human authors aren’t inerrant any more than machines are. One of the things that we try to do in general is encourage people to be like the Berean Jews in Acts 17, who questioned Paul and searched the Scriptures to see if the things he said were true. Don’t take AI’s word for it—or any human author’s, for that matter. Dig in and validate it for yourself.