Of Goblins, Grok, and Getting AI Right in Libraries

One of the classes I teach is the Library Support Staff Certification (LSSC) module on Technology for folks pursuing the certification. In the most recent session, which just wrapped up, I encountered a couple of different attitudes toward AI that I found interesting and wanted to explore in writing – so here we are.

First off, one person was uncomfortable with the use of copyrighted materials in AI training and asked me to remove AI mentions from the assignment to find and discuss how emerging tech might be used in libraries in the future. My response is that, as a published author with 5 copyrighted books to my name along with articles and contributed chapters and whatnot, I really want people (using a wide definition of “people” in this case, I admit) to read and learn from what I write. It’s why I go to the trouble (it’s not for the money, trust me). I can’t see much difference between an AI ingesting my text to learn from and a library staffer checking my book out from their professional collection to learn from. Other than the couple of dollars in royalties I got from the library’s purchase of my book, I don’t see any material difference in who or what is making use of my writing in order to learn things.

That being said, I am one person with one take on the subject, and there are numerous other, equally valid takes as well, so I’m definitely not speaking for all authors here! Just myself.

To be honest, while I’m less concerned about the copyright issue for myself, I am concerned about the environmental impact of all this computing power being put to the task of coming up with catchy titles for presentations… but I’m of the opinion that green energy is nearly unstoppable at this point, despite current efforts to slow it down, and we’ll eventually get the right balance of renewable energy sources and less energy-hungry chips. Again, just my personal belief; yours may differ.

So while AI is a thing that is happening in the world and, therefore, in libraries, one thing I’d like to see more of is strong AI policy around its use in our organizations. Most libraries seem to be holding off, but putting something in place now would help – even just boundaries around uploading personally identifiable information (PII) to a chatbot, not using it as a substitute for a search engine without thoroughly checking sources, and not letting whatever comes out of the black box go out unedited (see “white genocide in South Africa and Grok” for more info on that – https://www.nytimes.com/2025/05/17/opinion/grok-ai-musk-x-south-africa.html?unlocked_article_code=1.IU8.1ur3.ZoMkQPUgkZmz&smid=url-share for a gift article link).

Blanket prohibitions on its use (like the ones Wikipedia endured in its early years) are not going to be useful or helpful for our patrons. They use it; we need to understand how to help them use it more effectively and SAFELY. Those patrons may find something like the Goblin Tools suite – tiny chatbot-powered tools that break large “to-do” items into smaller steps, check the tone of a piece of writing, or estimate the time and effort involved in doing something – genuinely useful, and it is our job to help them find it and use it SAFELY.

I’m in the process of writing an AI policy for my classes – what isn’t allowed, what I won’t do with AI (grade or provide feedback, for example), and what might be useful ways to use AI in my particular classes (still ruminating on whether to add that bit in or not, to be honest) – to put in the syllabus so that we all start off with a common baseline of what is and isn’t appropriate for AI use in class. I strongly suggest libraries do the same – come up with some policies that will guide staff and patrons on how to best use this new technology without forbidding it entirely.
