I just wanted to follow up on a couple of things that came up in the Perspectives on AI conference session I did on April 9th, 2026. It was a short session—30 minutes—so I didn’t get a chance to fully answer all of the questions, concerns, and comments in the chat before our time was up. There were a couple I wanted to address, even if the folks who originally made those comments or asked those questions never see this.
First, in response to a question about copyright, I shared my take on the use of my books and materials in training AI. I prefaced this by noting that this was my perspective—not necessarily the view of most of my author friends or other authors I’ve heard from—but it’s how I think about it. I write books to share the information and experiences I have, and I expect them to be read and used (hopefully) to make someone’s life better, help them get a new job, pick up a new skill, or whatever else.
I don’t see a significant difference between someone walking into a library, checking out one of my books, and reading it to learn something new, and an AI chatbot ingesting that same content in order to better predict answers to questions later. The only real difference, for me, is that the library paid for my book and ChatGPT hasn’t (so far…). I do think chatbots should pay for the material they use, but otherwise, the difference in use isn’t meaningful enough for me to worry about. That’s a bit of a hot take—I know it’s not the popular opinion—but it’s how I choose to look at it. I also didn’t have time to mention that copyright as it relates to AI is still very much up in the air.
Someone in the keynote session mentioned in the chat that the AI policy linked there used the word “copyright” only once in the entire document. That’s likely because we are still unsure about the legal landscape surrounding copyright and AI, and until the courts make decisions, that uncertainty will remain. It’s difficult to write policy in that kind of environment, so people do the best they can with what they have.
Another question was why I used ChatGPT at all in my process for writing the upcoming Special Report on AI Policy in Libraries. I explained that my process included writing a first draft, submitting that draft to ChatGPT, and asking whether there were questions that non-technical readers might have that I hadn’t answered, as well as whether there were areas that could be sharpened, expanded, or improved. I then reviewed those suggestions, selected the ones I thought were worthwhile, and incorporated them into a second draft.
The full manuscript was then sent to my editor, who had still more valuable questions and suggestions, most of which I incorporated into the final draft. That draft is now sitting in my inbox, waiting for a final read-through before publication. The question, essentially, was why I included that additional step of asking ChatGPT for input when my editor would be providing feedback anyway.
My answer—which I didn’t have time to give—is twofold. First, the chatbot provides a different perspective and can surface ideas that may not occur to my human editor. Second, it’s quick and easy, and it doesn’t require my editor—who is undoubtedly very busy—to spend time reviewing an early draft. Instead, she receives a more polished and complete manuscript. That manuscript still required additional work from me, but it was less taxing on her than it would have been if I’d sent each chapter as I finished it. In that way, the process splits the work between machine and human and, I hope, results in a more complete and useful final product.
So there you go—that covers the questions I didn’t get to during the session itself! The recording will be available at the link above, along with the keynote panel I participated in, which turned into a pretty good conversation with folks from across the library and nonprofit world.