The talk about artificial intelligence (AI) is everywhere lately and has been for a while. But what are people actually talking about when they use the term AI? Like any buzzword or new idea, it’s at risk of being misappropriated or stretched into a catch-all term applied more widely than it needs to be.
So with a slate of programming coming up this fall in our Tech Forum educational offerings all about AI and its applications to the publishing industry (you can find them and register for them here), we thought it would be a good idea to define and clarify a few AI terms so that we’re all starting from a shared frame of reference. An AI primer, if you will.
First of all, what is AI? Well, “in its broadest sense, [AI] is intelligence exhibited by machines, particularly computer systems.” Very broad indeed. Essentially, AI is a catch-all term for a machine “learning”. It’s only when we use more specific terms that we can start to distinguish what kind of learning a computer is doing, or how that learning shapes a tool’s output.
Terms to know
Large language models: “[A] category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.”
Neural network: “[A] type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain.”
Generative AI: “[R]efers to AI that can find complex relationships in large sets of training data, called a corpus, then generalize from what they learn to create new data, including original illustrations, blog drafts, answers to questions, and more.” You might have heard of ChatGPT for text or Midjourney for images; both of these tools use generative AI.
Predictive AI: “Predictive AI primarily focuses on forecasting, whether that is forecasting patterns, future trends, or events…. Instead of neural networks, predictive AI relies on more simple models to gather large amounts of data, also known as “big data,” and provide predictions based on that data.”
Multimodal models: “A multimodal model can work with different types, or modes, of data simultaneously. It can look at pictures, listen to sounds and read words…. It can combine all of this information to do things like answer questions about images.”
Prompts: The user-generated instructions that tell the AI what to produce (see the short sketch after this list of terms).
Hallucinations: “[I]ncorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”
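To make a few of these terms more concrete, here’s a minimal sketch of what sending a prompt to a large language model looks like in code. It assumes the OpenAI Python client and an API key in your environment; the model name and the prompt text are placeholders for illustration, not recommendations.

```python
# A minimal sketch of sending a prompt to a large language model.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name and prompt
# below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt: user-generated instructions telling the model what to produce.
prompt = "Suggest three subject categories for a cozy mystery set in a bookshop."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The model generates new text in response, which is generative AI at work.
print(response.choices[0].message.content)
```

Run it twice and you may get two different answers; that generative flexibility is also where hallucinations can creep in, which is why the output always needs a human check.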
More to learn
Bonus reading if the above primers and explainers have only piqued your curiosity: some thoughts on how to balance innovation in AI and machine learning with responsibility, a three-part series about what’s AI and what’s just a fancy algorithm, and some philosophy about what it means to be conscious and biochauvinism. It’s a new frontier and there’s a lot to explore and think about with AI.
If you’re interested in AI and the publishing industry, which you likely are since you’ve made it this far, join us this fall for the free Tech Forum sessions: Applying AI to publishing: A balanced and ethical approach, and AI for enhanced discoverability and user experience in online bookstores.