CR 068: Bruce Holsinger on ‘Culpability’ in the Age of AI
The author and professor discusses his latest page-turning thriller.
Since its publication in July, Bruce Holsinger’s fifth novel, Culpability, has received rave reviews from The New Yorker, Kirkus, The New York Times, People, Real Simple, Publishers Weekly, Newsday, and even Oprah herself. And yet, unlike most authors, Holsinger says he didn’t grow up with dreams of one day writing fiction.
“The first extended fiction I wrote were dungeons for Dungeons & Dragons,” he says. “I was always the Dungeon Master and did these elaborate dungeons for my friends. That was the kind of writing I loved.” Once in college, he didn’t follow the fiction writing path because, he says, “I was an academic. I was writing my dissertation, seminar papers, all those kinds of things.”
Though Holsinger—who received his Ph.D. from Columbia University and is currently an English professor at the University of Virginia—is widely known as an expert in medieval literature and culture, Culpability is very much a 21st-century story. When it begins, we meet a family of five who are riding in a fully autonomous, self-driving minivan to a lacrosse tournament. Seventeen-year-old Charlie is behind the wheel. His lawyer dad, Noah, rides shotgun, typing away on his laptop. Mom Lorelei, a world-renowned AI ethics expert, is working in the backseat, with the family’s phone-absorbed tweens, Alice and Izzy, next to her. When their car hits another vehicle, instantly killing an elderly couple, it sets off a chain of events that changes the family forever. A riveting page-turner, it covers a wide range of themes—artificial intelligence, corporate greed, parent-child dynamics, tech addiction—though at its core it’s driven by one primary question: When the technology we’ve grown so dependent on goes awry, who is to blame?
Though Holsinger says the story didn’t start out specifically focused on this subject, he believes it’s something we’ll be grappling with for many years to come. “I think we’re just at the beginning of having these kinds of conversations,” he says. “If anything, I hope the novel helps us slow down a bit, or at least sit back, take a breath, and understand that the morality of these technologies is not straightforward.”
Over a Zoom call, Holsinger and I discussed his academic background, his creative process, and the piece of advice he gives to all of his writing students.
This content contains affiliate links. I am an affiliate of Bookshop.org and I will earn a commission if you click through and make a purchase.
SANDRA EBEJER: I recently came across your Wikipedia page.
BRUCE HOLSINGER: Oh no.
You’re described as “an expert on the use of parchment in medieval English manuscript production.” That’s pretty niche. How does one become an expert in this field?
I’m an English professor at UVA. My Ph.D. was in English and comparative literature, and I worked on literature and music in the Middle Ages. That was the general area of my dissertation. So that’s been my career for the last 30, 35 years. I taught at the University of Colorado for a while before coming to UVA 20 years ago. Parchment is the subject of my last academic book. It’s called On Parchment: Animals, Archives, and the Making of Culture from Herodotus to the Digital Age. It was a look at the history, the craft, the theology, the science of parchment since the ancient world. Parchment is a word for the animal skins that were used as one of the primary writing surfaces in the Middle Ages, but it has a much longer history, and there are a lot of people still working with parchment today. So that explains that weird line in my Wikipedia page.
How did you transition to writing fiction?
I published my first novel in 2014, so only 11 years ago, but I had been writing fiction. I wrote a thriller when I still taught at University of Colorado. It never got published. I never got an agent for it. I tried again and I just kept doing it. Writing fiction was just a hobby for a long time. I did a dual timeline novel that also ended up not getting published. Finally, when I found an agent, she convinced me to write a novel that was set in the Middle Ages, the period that I studied and teach, and that ended up being my first novel, A Burnable Book. I really do think of those kinds of writing as two separate ways I use my brain, two separate ways I use my time. Sometimes they meet, often they don’t, and that’s okay. I like keeping those two aspects of my life separate in many ways.
You don’t write lighthearted, easy, breezy novels. Where do the ideas for your books come from?
It can be anything and everything. The idea for The Gifted School came from living in a culture of competitive, braggy parents, including myself, in Boulder and then in Charlottesville. My parents were educators, so that idea just bubbled to the surface over a number of years. The Displacements, my last novel, imagines the world’s first category 6 hurricane and what it does to South Florida and Houston. That came from my own fear and obsession about climate change, and my worry about our near future.
Culpability began with the setting. It was during the pandemic, and our family was down in the Northern Neck of Virginia, which is where most of the novel is set. We got an Airbnb on this pretty modest, kind of ramshackle cove. I was kayaking and I saw this massive compound next door, and a helicopter landed on a helipad as my son and I were kayaking. That setting and the juxtaposition of wealth really got to me and I wanted to set a novel there, but the story didn’t come to me for a couple years. Then when it finally did, it wasn’t about artificial intelligence. It was just about a family recovering after a car accident. I thought that would be a good place for them to go to recuperate after the initial shock has worn off. So what drew me in was that landscape.
So how did that morph into what the book became? Because there’s so many layers to it. Do you plot out your stories in advance?
Ha!
Okay, that’s a no. So how does the process work for you?
I am not a plotter. I often get into trouble. I’ll write myself into a hole in my plot or two parts of a story aren’t coming together, and I have to scrap something. That just happens to me a lot. With this novel, it really crystallized for me when I said, “I want to think about a family that’s going through a car accident and who caused it. And everybody in the car that day thought of themselves as somehow responsible. That could be a good family drama of the sort that I’ve written in the past.” So I was just scraping along in this idea, and then I thought, “What if they were in a car that was in self-driving mode at the moment of the accident?” Then you’ve got the who, but you’ve also got the what. But I wasn’t really thinking about artificial intelligence at all. That was in the summer of 2022 when I figured out that angle. Then in the fall of 2022 is when ChatGPT erupted into the public sphere, and everybody started talking about AI. I realized very quickly that this novel is actually about artificial intelligence and how it intersects this family’s catastrophe and its aftermath and issues of blame and guilt and responsibility. Hence the title and the theme of the novel.
Forgive all the craft questions, but I’m fascinated by this subject. So, you come to realize—after you’ve already spent however much time writing the book—that there’s a new component to it or it’s going in a different direction. Do you scrap everything you’ve done? Do you start over from the beginning?
No, to the contrary. Once I understood that was a big sub-theme of the novel and that it was resonating in all these ways with different aspects of tech and phones and AI and all these digitized ways that we go through the world, that helped me a lot. It really started streamlining it for me. It started introducing all these other elements, like Alice’s chats with Blair, her chatbot, which is a real spine of suspense in the novel. I realized that really needs to be a bigger part of things. It was as if my interest in that dimension of it just kept layering itself into different parts of the plot, to the characters, to the ways the characters relate to each other, to the world building that I’m doing in the novel. It all just started to gel. And that often happens with me. I’ve probably written more novels, and certainly more starts of novels, than I’ve published, by double, I would say. Sometimes I start to feel an idea gel, and it gets more momentum, and I get this confidence in the story and in the characters. And when that happens, that’s just a magical moment.
You have excerpts in the novel from Lorelei’s book on the ethics of artificial intelligence. Did you have to do much research around AI or any of the technology referenced in the book?
Oh yeah. I interviewed a lot of people in those spaces—AI, legal, ethical, technological. I did a ton of listening to podcasts and reading. I read a lot of detailed research and had a friend or two help me unpack it a bit. And when I came to Lorelei’s character, those excerpts from her book came to me pretty late in the drafting process. Probably by my third draft, I would say. My last four novels have all been from multiple points of view. This one is from one point of view, and it’s just first person. I just wanted to tell this story from Noah’s point of view. I wanted him to be not a deceptive or unreliable narrator, but a little bit clueless about what’s going on around him. But I didn’t want to sacrifice everything to his perspective.
In my novels, I do a lot of what I would call paratextual writing, where you get the main story and then you get little blips of texts that come in and illuminate it from some other perspective, a wider perspective, and other characters. Lorelei’s book worked like that for me. And I think I’m a pretty good mimic. One of the ways I think about writing those elements of my novels is mimicry: putting on the voice of an academic philosopher. I always have fun with that. So that’s where those excerpts came from.
On the flip side, you have Alice’s conversations with her chatbot “friend,” which are written in the style of a young teen. How was it for you to write those sections?
Those were really interesting. The research I did there was on the effect of chatbots on young people. Like, there were school systems during the pandemic that had large language models attached to their online portals for student homework, and they were encouraging students to use them. I found that really shocking. I got immersed in that and looked at some legal trial transcripts to see what the transcripts of the chats looked like. Again, I was just mimicking. But I also had to get the lingo right for someone Alice’s age, so you don’t make people that age cringe, which is easy for me to do.
There’s a brilliant twist at the very end of the book—it’s one that I still think about quite often. Without giving anything away, did you know when you began the book that the twist would be there? I know you said you’re not a plotter.
Once I understood that this book was about AI and agency and autonomy and human-computer interactions, I didn’t want a predictable story about the evils of artificial intelligence or a morally straightforward, black and white, good/bad story. And so Lorelei is completely obsessed with the ethics of artificial intelligence, and the spine of the book—those chatbot conversations, the autonomous vehicle—ends up making the morals of the thing very gray. You can notice the way that the chatbot, Blair, tries to edge Alice toward a more moral approach to her own culpability. That was a pretty late addition. In fact, there were only a couple of those bot conversations in the first draft. I thought it might be fun to sprinkle a couple in, and it was my wife who read the draft and said, “I want more of that.” And then my agent also said, “I need more bot.” And so that ended up being a central line of the plot.
Now that you’ve written the story and done all this research, has your perception of AI changed?
I would like to say yes, but I don’t know if I can. I’ve been very ambivalent about it the whole time. I’m ambivalent about self-driving cars, for example. You know, I was in a Waymo while I was on tour, and one of the things that I say when people ask me questions about this is, “When’s the last time you were in an Uber or a Lyft when the driver was not on their phone while driving you around? Probably never, because when they’re driving you around, their job is to get their next fare, so they’re constantly on their phone.” A Waymo is, above all, concerned for your safety, right? It doesn’t get distracted by its phone. It can get messed around with by other things, but part of me feels much safer now in that kind of experience. So that’s just one example of AI and the morally ambiguous nature of the technology.
Shifting gears a bit, who are your influences as a writer? Who do you turn to for inspiration?
I grew up reading, for some reason, a lot of Charles Dickens. By the time I’d finished college, I had read all of Dickens. He’s got all these big novels with multiple points of view. Some of them are historical, some of them are contemporary. They’re kind of all over the place. I would say that, in the marrow of my bones, that’s the model for how to write fiction. But then, I went to graduate school and read medieval literature. I really love the way the literature of the medieval world plays with stories within stories. Dream visions, like this great poem “Piers Plowman,” a dream within a dream. Frame narratives, like Chaucer’s Canterbury Tales, where you have this big overarching story that unites a bunch of smaller stories within it. Those are the kinds of narratives that are deeply ingrained in my sensibility. But I’m also a big reader of thrillers. I love thrillers. I’m in a big jag of reading contemporary autofiction. I’ll read literary fiction till the cows come home. I’ll just read all over the place.
You have a lot on your plate, given your full-time job at UVA. Do you consider your fiction writing a side job or hobby? And do you write every day?
I would say it’s been a side job or hobby up until the last few years. Now, suddenly, it does feel like I have two completely legit and separate careers, so that’s a little disconcerting. But yes, I write every day. Or if I don’t write, I at least open the document of the novel that I am working on, stare at it, and close my computer and get some coffee. I mean, that’s the least I will do. But I try to write every day, and often it’ll take me a while to get into a story and start to really have flow. Once I’m there, I can write a couple thousand words a day on a really good day. And if I write 500, I’m perfectly happy.
In addition to your job at UVA, you teach craft classes at WriterHouse. What’s one piece of advice that you find yourself frequently giving to your students?
My constant and consistent single most important piece of advice is once you’ve finished a novel, the best thing you can do for yourself is to start another one. A lot of people finish a novel and start to market it, try to get an agent. Don’t sit with that one manuscript and hate the world because you’re not getting agents to respond to you, or because your agent isn’t getting editors to respond. Or if you’ve published one novel, don’t be frustrated that it’s not getting marketed enough, not getting reviewed enough, not getting sold enough. It’s more than a matter of just butt in seat. It’s also a matter of realizing the writing that you’ve done has already honed a skill. Use that skill to start something completely different. Branch off in a completely different direction. It just keeps you fresh. That’s my number one piece of advice.
To learn more about Bruce Holsinger, visit his website.
To purchase Culpability, click here.
This interview has been edited for clarity and length.