Generative AI News - issue #13
Exploring AI’s potential and staying current with AI literacy
December 1, 2025
Hi everyone!
If you’re new here, welcome! There has been a lot of news to keep up with recently. So let’s jump right into it.
It seems all the big models have made updates and added new features:
Google’s new Gemini 3 model arrives in AI Mode and the Gemini app
Adobe introduces new AI features for faster video, photo, and design workflows
We’ll also look at interesting stories like these:
The Case for AI as Accommodation - Matthew Brophy
AI and the Future of Pedagogy - Tom Chatfield
SIFT for AI: Introduction and Pedagogy - Mike Caulfield
Google Scholar Finally Enters the AI Era - Aaron Tay
And finally:
My offerings: the Dec. 5 webinar, AI’s Environmental Impact, and the self-study course, AI Literacy for Library Workers.
The usual links to tips, thought-provoking articles, and the future.
Enjoy!

Foundation Models
Google
Google’s new Gemini 3 model arrives in AI Mode and the Gemini app - Engadget, Nov. 18, 2025
”Gemini 3 Pro will debut inside of AI Mode, with availability of the new model first rolling out to AI Pro and Ultra subscribers. Google will also bring the model to AI Overviews, where it will be used to answer the most difficult questions people ask of its search engine. In the coming weeks, Google plans to roll out a new routing algorithm for both AI Mode and AI Overviews that will know when to put questions through Gemini 3 Pro. In the meantime, subscribers can try the new model inside of AI Mode by selecting “Thinking” from the dropdown menu.”
Three Years from GPT-3 to Gemini 3 - Ethan Mollick, Nov. 18, 2025
”Gemini 3 is a very good thinking and doing partner that is available to billions of people around the world. It is also a sign of many things: the fact that we have not yet seen a significant slowdown in AI’s continued development, the rise of agentic models, the need to figure out better ways to manage smart AIs, and more. It shows how far AI has come. Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker.”
Start building with Gemini 3 - Google, Nov. 18, 2025
”Whether you’re an experienced developer or a vibe coder, Gemini 3 can help you bring any idea to life.” “Gemini 3 Pro unlocks the true potential of “vibe coding”, where natural language is the only syntax you need. By significantly improving complex instruction following and deep tool use, the model can translate a high-level idea into a fully interactive app with a single prompt.”
Generative UI: A rich, custom, visual interactive user experience for any prompt - Google Research, Nov. 18, 2025
”We introduce a novel implementation of generative UI, enabling AI models to create immersive experiences and interactive tools and simulations, all generated completely on the fly for any prompt. This is now rolling out in the Gemini app and Google Search, starting with AI Mode.”
Anthropic
Anthropic’s Claude Opus 4.5 is here: Cheaper AI, infinite chats, and coding skills that beat humans - Michael Nuñez, VentureBeat, Nov. 24, 2025
”Anthropic released its most capable artificial intelligence model yet on Monday, slashing prices by roughly two-thirds while claiming state-of-the-art performance on software engineering tasks — a strategic move that intensifies the AI startup’s competition with deep-pocketed rivals OpenAI and Google.”
Vibe Check: Opus 4.5 Is the Coding Model We’ve Been Waiting For - Parrot, Shipper, Klaassen, Every - Nov. 24, 2025
Claude Opus 4.5 Changes What’s Possible with Vibe Coding - The AI Daily Brief, YouTube
OpenAI
GPT-5.1: A smarter, more conversational ChatGPT - OpenAI, Nov. 12, 2025
OpenAI reboots ChatGPT experience with GPT-5.1 after mixed reviews of GPT-5 - Emilia David, VentureBeat, Nov. 12, 2025
”The 5.1 tag reflects improvements to the base model, and OpenAI considers these part of the GPT-5 family, trained on the same stack and data as its reasoning models. The biggest difference between 5.1 and 5 is its more natural and conversational tone, OpenAI CEO for Applications Fidji Simo said in a Substack post. “Based on early testing, it often surprises people with its playfulness while remaining clear and useful,” OpenAI said in its post. Instant can use adaptive reasoning to help it decide when it needs to think about its answers, especially when it comes to more complicated questions. OpenAI noted that it has improved the model’s instruction following, so that while it continues to respond quickly, it also directly addresses the user’s query.”
Introducing shopping research in ChatGPT - OpenAI News, Nov. 24, 2025
”Shopping research is built for that deeper kind of decision-making. It turns product discovery into a conversation: asking smart questions to understand what you care about, pulling accurate, up-to-date details from high-quality sources, and bringing options back to you to refine the results.”
Helping 1,000 small businesses build with AI - OpenAI News, Nov. 20, 2025.
”OpenAI’s latest AI Jam equips small businesses to grow and compete.”
A free version of ChatGPT built for teachers - OpenAI News, Nov. 19, 2025
”A secure ChatGPT workspace that supports teachers in their everyday work so they can focus on what matters most—plus admin controls for school and district leaders. Free for verified U.S. K–12 educators through June 2027.”
Introducing group chats in ChatGPT - OpenAI - Nov. 13, 2025
”Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.”
Microsoft
Microsoft Copilot gets 12 big updates for fall, including new AI assistant character Mico - Carl Franzen, VentureBeat, Oct. 23, 2025
”The Fall Release consolidates Copilot’s identity around twelve key capabilities—each with potential to streamline organizational knowledge work, development, or support operations. 1. Groups – Shared Copilot sessions where up to 32 participants can brainstorm, co-author, or plan simultaneously. For distributed teams, it effectively merges a meeting chat, task board, and generative workspace. Copilot maintains context, summarizes decisions, and tracks open actions.” (and 11 other updates)
Others
Meta returns to open source AI with Omnilingual ASR models that can transcribe 1,600+ languages natively - Carl Franzen, VentureBeat, Nov. 10, 2025
”At its core, Omnilingual ASR is a speech-to-text system. The models are trained to convert spoken language into written text, supporting applications like voice assistants, transcription tools, subtitles, oral archive digitization, and accessibility features for low-resource languages. … This version can transcribe languages it has never seen before—using just a few paired examples of audio and corresponding text. This lowers the barrier for adding new or endangered languages dramatically, removing the need for large corpora or retraining.”
Baidu just dropped an open-source multimodal AI that it claims beats GPT-5 and Gemini - Michael Nuñez, VentureBeat, Nov. 12, 2025
”What sets Baidu’s release apart is its efficiency: the model activates just 3 billion parameters during operation while maintaining 28 billion total parameters through a sophisticated routing architecture. According to documentation released with the model, this design allows it to match or exceed the performance of much larger competing systems on tasks involving document understanding, chart analysis, and visual reasoning while consuming significantly less computational power and memory.”
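For readers curious about what “activates just 3 billion parameters” means in practice, this is the hallmark of a mixture-of-experts design. The toy sketch below (my own illustration with made-up sizes, not Baidu’s actual architecture) shows the basic idea: a small router picks a few “expert” networks per token, so only a fraction of the model’s total weights does any work on a given pass.

```python
# Toy mixture-of-experts routing sketch (illustrative only, not Baidu's
# ERNIE architecture; all sizes here are made up). It shows why a model
# can hold many parameters in total while "activating" only a few:
# a router selects the top-k experts for each token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 16, 2            # hypothetical sizes

router_w = rng.normal(size=(d_model, n_experts))                  # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """Route one token vector x through only its top-k experts."""
    scores = x @ router_w                                          # score each expert
    chosen = np.argsort(scores)[-top_k:]                           # indices of top-k experts
    gate = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()   # softmax over chosen
    # Only the chosen experts' weights are touched ("active") for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape, f"- active experts: {top_k} of {n_experts}")
```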
What’s new? Issue #219 - Testing Catalog, Nov. 9, 2025
”Last week brought several notable launches from smaller labs and some interesting announcements. Most notably, Kimi K2 Thinking is now available. It’s impressive that they managed to train such a high-performance model at a low cost, putting pressure on OpenAI, Google, and others to step up their own releases to maintain their performance advantage. Not only is Kimi K2 Thinking cheaper, but Moonshot, the company behind it, has a strong consumer focus with feature-rich apps, unlike DeepSeek. Their platform already includes agent capabilities with computer use, and more advanced agentic features are promised soon. This approach is quite different from last year’s DeepSeek, and Kimi has real potential to challenge OpenAI’s dominance in the market.”
Vibe-coding
Google’s Opal: A Closer Look - Daniel Nest, Why Try AI, Nov. 13, 2025
”Because these nodes are plug-and-play compatible with each other, your mini-app is transparent, easily tweakable, and less likely to break in unexpected ways. In short, it just works. Think of it like creating things with ready-made LEGO blocks vs. a 3D printer. Sure, the 3D printer technically lets you make whatever you want, but good luck learning the process and assembling something that doesn’t collapse under its own weight without prior experience, buddy.”
Is Google AI Studio the Best New Vibe Coding Tool? - Nat Eliason, Nov. 13, 2025
”I’m not normally a big proponent of browser-based vibe coding tools, but Google has done a really great job with the launch of their new AI Studio. If you’re just getting started vibe coding, this might be the best place to start.”
The Best Vibe Coding Tools as of October 2025 - Nat Eliason, Oct. 21, 2025
Images, video, voices, and music
Images
Nano Banana Pro: But Did You Catch These 10 Details? - AI Explained on YouTube, Nov. 20, 2025
”Nano Banana Pro is no incremental upgrade. From double exposures to almost (!) perfect infographics, comic strips, to pricing, benchmarks and more, I go through 10 facets of the new model you won’t get just from reading the headlines.”
Nano Banana comics for Caulfield’s “Three moves, seven tips” - Nicole Hennig, Nov. 22, 2025
I’ve been playing with the new Nano Banana Pro and using Mike Caulfield’s “three moves, seven tips” as content.
Disney+ to Allow User-Generated Content Via AI - Hollywood Reporter
”Disney CEO Bob Iger hinted at “productive conversations” with unnamed AI companies that would protect IP but add new features to the studio’s flagship streaming service.”
Voice
The AI user interface of the future = Voice - Grant Harvey, The Neuron, Nov. 19, 2025
With tips for using voice: “Voice gives you superpowers if you treat it like its own medium, not just “spoken typing.” You speak faster than you type, but you read faster than you listen. So for big, messy tasks, use your voice to talk through the problem and give the model all the context it needs, then skim the answer like you would any long-form doc.”
License Legendary Voices - Eleven Labs
(Judy Garland, Lana Turner, Alan Turing, and others)
”License AI voices and IP of history’s most iconic figures for your creative projects. Request licensing for entertainment legends, sports heroes, musical pioneers, and transformative historical figures. Our marketplace connects creators with rights holders for the most iconic IP.”
Video
From Cinema To The Noematograph - Nathan Gardels, Noema, Nov. 14, 2025
”He continues: “Rather than thinking about AI as a cheap way to replace filmmakers, to replace writers, to replace artists, think of [it] as a new kind of machine that captures something and plays back something. What is the thing that it captures and plays back? The content of thought, or subjectivity.” The ancient Greeks called the content, or object of a person’s thought, “noema,” which is why this publication bears that name. Liu thus invents the term “Noematograph” as analogous to “the cinematograph not for motion, but for thought … AI is really a subjectivity capturing machine, because by being trained on the products of human thinking, it has captured the subjectivities, the consciousnesses, that were involved in the creation of those things.”
Why Guillermo del Toro Will Eventually Use AI - AI Video School, Oct. 20, 2025
”Humans are really amazing at creating. Not all humans can employ giant crews to operate cameras and hold boom mics though. This is why AI video has captured the imagination of so many people left out of the current system. We can tell stories in ways that were unattainable. Del Toro will probably never write a prompt for Sora or Veo, and will definitely never ask ChatGPT to write a script for him. But I can easily imagine how AI will be included in the workflow of his films. It may involve adding a newly imagined detail to one of the sets that a set-designer and crew built. Re-lighting a scene with his cinematographer in post-production. Fixing the eyeline of a background extra who looked directly at the camera, almost ruining a perfect take. Changing the wardrobe on an actor in a shot that matches the original design of the costume director. A re-shoot that involves asking an actor who is in a different country to perform a scene on their iPhone that will match the lighting, wardrobe, and background of the original.”
The number one sign you’re watching an AI video - Thomas Germain, BBC, Nov. 3, 2025
”The real solution, according to digital literacy expert Mike Caulfield, is for us all to start thinking differently about what we see online. Looking for the clues AI leaves behind isn’t “durable” advice because those clues keep changing, he says. Instead, Caulfield says we have to abandon the idea that videos or images mean anything whatsoever out of context.”
Adding all the various modes to everything
Adobe introduces new AI features for faster video, photo, and design workflows - Amber Neely, Apple Insider, Oct. 28, 2025
”Firefly now supports full video and audio production workflows. Generate Soundtrack, in public beta, uses Adobe’s Firefly Audio Model to create original, licensed instrumental tracks that automatically sync with your video footage. Generate Speech, currently in public beta, transforms text into realistic voiceovers in a variety of languages. It also creates emphasis and controls tempo for life-like delivery.”
Introducing ElevenLabs Image & Video - Imogen Mulliner, Eleven Labs, Nov. 25, 2025
”Within ElevenLabs, you can now bring ideas to life in one complete creative workflow. Use leading models like Veo, Sora, Kling, Wan and Seedance to create high-quality visuals, then bring them to life with the best voices, music, and sound effects from ElevenLabs.”
Tips
7 tips to get the most out of Nano Banana Pro - Google, Nov. 20, 2025
NotebookLM’s new features: slide decks and infographics - Google
NotebookLM rolls back slide decks and infographics temporarily
NotebookLM adds Deep Research and support for more source types - Google
Add images as sources in NotebookLM - NotebookLM on X
”Whether it’s a photo of handwritten notes, a screenshot of a textbook or graphs on a web page, NotebookLM can synthesize the information and produce outputs from it.”
Google Docs Gets Smarter: Audio narration, AI help, and 40 new templates - Jeremy Caplan, Nov. 7, 2025 (Tabs, listen to your writing, pageless formats, templates, and more)
6 new things you can do with AI in Google Photos - Google, Nov. 11, 2025
Pro tips on using AI for deep research - Dan Russell, Search ReSearch, Nov. 7, 2025
My offerings
AI’s Environmental Impact - Library 2.0 webinar, Dec. 5, 2025
(A repeat of the session I offered on Oct. 17)
”This webinar cuts through the confusion to help you make informed choices about sustainability in your classrooms, libraries, and communities.
We’ll examine independent estimates of AI’s energy and water use and put them in context in ways that are easy to understand.
We’ll include an introduction to how data centers work and what they are used for. We’ll clarify what we know and what’s still uncertain about AI’s carbon footprint (both in the present and in future projections).
We’ll compare individual AI use to other digital activities, and we’ll also look at global use of data centers with statistics from the International Energy Agency.”
AI Literacy for Library Workers: self-study course - Nicole Hennig
(My six-week course from Infopeople is now over, but you can still sign up for the self-study version).
No deadlines.
The course will receive updates once a year – you’ll have perpetual access.
Go at your own pace, come back and review anytime.
You’ll get a certificate of completion when you finish the course.
(See end of this issue for comments from previous participants).
Accessibility
The Case for AI as Accommodation: Treating AI as optional risks perpetuating ableism in higher ed. - Matthew Brophy
”The stories students have shared across my classes have been revelatory to me. A student with aphantasia uses AI to generate diagrams and mnemonics in organic chemistry, compensating for the fact that she cannot form mental images. Another battling anxiety builds confidence by generating practice tests with ChatGPT, rehearsing so she no longer freezes at exam time. A commuter student turns dense readings into audio so she can review them while driving. One student identifying as on the autism spectrum told me she uses AI for social insights, such as how she could contribute to a group project without sounding “pushy.” A first-generation student told me she relies on AI to figure out what she needs to do in college: How to email a professor, what office hours are for, what an annotated bibliography is. A multilingual student said he asks AI “basic” comprehension questions that he’d be embarrassed to ask someone else. A student with ADHD described getting easily overwhelmed, but ChatGPT would assure her that she could do it, and would break up her workload into small “bite-sized” tasks she could tackle one at a time.”
AI will be a Big Win for Blind Users, Not a Shield From ADA Lawsuits - Jason Taylor, UsableNet, Nov. 18, 2025
”Traditional screen readers rely solely on underlying website code, meaning they cannot perceive or interpret the visual layout that sighted users take for granted. AI changes that dynamic entirely. By analyzing both the code and the visual appearance of a webpage, AI can act much like a sighted assistant sitting beside a blind user: describing visual elements, suggesting actions, identifying likely interactive targets, and offering alternative strategies for completing tasks online.”
Future smartglasses may change the way deaf people hear - Andy Boxall, Aug. 13, 2025
”Research is being carried out into how smart glasses equipped with a camera, AI, and a data connection could help deaf people better follow real-time conversations.”
Hate Meta? Even Realities Is Making the Smart Glasses You Want - Julian Chokkattu, Wired, Nov. 12
”So what do you actually do with the G2? You can view notifications, use the translation function to see translated words of the person you’re speaking with on the display of the glasses, see navigation instructions as you walk, and pin a to-do list. … The microphones can hear when you say “Hey Even,” and trigger the company’s Even AI assistant, which can act like your standard-fare large language model chatbot today and answer your queries.”
Hands-on: Smart glasses that finally look & feel normal – Even Realities G2 - Fernando Silva, 9to5Mac, Nov. 12, 2025
Even Realities subtitle glasses - Even Realities G2
Learn more about these glasses.
What’s happening in education
No, the Pre-AI Era Was Not That Great (opinion) - Zach Justus and Nik Janos, Inside Higher Ed, Nov. 20, 2025.
”First, it allows us to blame everything wrong with education on generative AI rather than acknowledge deep and justifiable concerns we have had for a while. The current technology serves as a convenient scapegoat for problems we may have been aware of but decided to live with. Course Hero, Chegg and other providers had industrialized academic dishonesty before ChatGPT was launched. We decided not to deal with that and, rather than face up to our past oversights, we have simply forgotten.”
Time, emotions and moral judgements: how university students position GenAI within their study - Margaret Bearman et al. Higher Education Research & Development, Nov. 11, 2025.
”Despite their centrality, students’ perspectives remain underexplored. We investigated how students position GenAI in relation to their studies, conducting focus groups with 79 students across four Australian universities. Taking a sociotechnical stance and employing reflexive thematic analysis, we identified three primary themes: (1) studying with and without GenAI; (2) mixing messages and assumptions; and (3) ‘coming from me’: self-trust and resistance to dependency. Crossing these themes were axial threads of time, emotions, and moral judgement. Our findings illuminate a complex, dynamic and uncertain landscape of relationships in which students prioritise their developing values and moral positions over institutional messaging.”
AI and the Future of Pedagogy - Dr Tom Chatfield, Sage White Paper, Nov. 23, 2025
”It critiques defensive, surveillance-based institutional responses to AI and calls for a shift toward transparent, experimental, and mastery-based assessment. It highlights the need for educators to become designers and facilitators of learning environments, and for institutions to ground their use of technology in civic and ethical purposes. Ultimately, the white paper argues for a future-focused pedagogy that prepares students to thrive in an AI-rich world by developing both technical fluency and distinctively human capacities.”
Why AI is So Hard (For Education) - Stefan Bauschard, Oct. 28, 2025
”Together, these thinkers suggest that the education system must evolve from preparing students for predictable employment toward supporting them to launch, lead, iterate, and scale—to become creators of enterprises, designers of their own careers, and collaborators with AI rather than passive recipients of instruction….In this light, the public arguments about whether students should use AI to brainstorm or write a paper seem almost trivial. They are symptoms of a system that still measures learning by compliance, not creativity; by authorship, not agency. The fundamental question is not how much AI a student uses, but how education helps them understand and shape the world that AI is creating.”
How well can Gemini 3 make a Henry James simulator? - Benjamin Breen, Nov. 18, 2025.
”How can educators adapt to and acknowledge the new reality but, at the same time, reject the idea that the only path forward is mass adoption of pre-packaged subscription services and for-profit “AI tutors”? How do we do something creative, interesting, and original with these tools rather than allow ourselves to be funneled toward the average, the expected, or the addictive.”
Gemini 3 Solves Handwriting Recognition and it’s a Bitter Lesson - Mark Humphries, Generative History - Nov. 25, 2025
”For the historical community, as we gradually become accustomed to this new reality, it will radically alter how historians, genealogists, archivists, governments, and researchers relate to our documentary past. It’s also a harbinger of the larger changes in our relationship with information that are now certain to come from scaling.”
What’s happening in libraries
Benchmarking AI Models in the Library: Critical Questions for Library Professionals and Users: How to select the best model for each job - Elizabeth Szkirpan, Choice, November 24, 2025
”It is challenging to directly compare most AI tools apples-to-apples because the underlying corpus, training process, and intended uses can vary widely. However, practical benchmarking questions can help librarians compare models to one another or be used to guide users through the reference interview in order to select the best model for the task at hand.”
Scholar Labs: An AI Powered Scholar Search - Google Scholar blog, Nov. 18, 2025
”Scholar Labs is an experimental feature and is as yet available to a limited number of users. This is a new direction for us and we plan to use the experience and feedback to improve the service. If Scholar Labs is not yet available for you, you can register to be notified when it is.”
Scholar Labs Early Review: Google Scholar Finally Enters the AI Era - Aaron Tay, Nov. 20, 2025
”If my assessment of how Scholar Labs works is correct, this is a game-changer solely due to the scale of Google Scholar’s index. Scholar’s index dwarfs OpenAlex, Semantic Scholar (used by many new “AI academic search”), and others. It includes full text from almost every major publisher (allows crawls from Google Scholar). If Scholar Labs inherits this exact index (and isn’t walled off from paywalled full-text), it should dominate in non-STEM disciplines where other databases are weaker.”
A 2025 Deep Dive of Consensus: Promises and Pitfalls in AI-Powered Academic Search - Aaron Tay, Nov. 15, 2025
”Academic librarians are increasingly being asked about AI-powered search tools that promise to revolutionize literature discovery. Consensus is one of the more prominent and earliest players in this space (alongside Elicit, Perplexity), positioning itself as a tool that can not only find relevant papers but assess the “consensus” of research on a topic.”
For those librarians who teach information literacy:
SIFT for AI: Introduction and Pedagogy - Mike Caulfield, Nov. 21, 2025
”What is happening here is not that AI is doing the thinking for students — rather, with prompts like these AI is generating preliminary maps of the information landscape. Students are then using those maps to navigate that landscape, and hopefully developing familiarity with the subject and patterns of critical reasoning (both general and grounded in the discipline) as they go while also developing helpful knowledge around how LLM tools work and how to use them effectively.”
AI Mode with Gemini 3 gets past the hedging to a welcome specificity - Mike Caulfield, Nov. 20, 2025
Just for fun
Anti-A.I. Hate Song - Kotoku Denjiro on Suno
A song talking back to the AI haters.
Good Morning, New Robots! - Kevin Kelly
Fun short story. “Good morning robots! This is a very special time for you because you are about to be turned on. “ON” means that you will have a body and energy.”
Nano Banana Pro is blowing my mind! - Blaine Brown on X
Fun prompt to try: “Make a 4×4 grid starting with the 1880s. In each section, I should appear styled according to that decade (clothing, hairstyle, facial hair, accessories). Use colors, background, & film style accordingly.”
Surrealistic AI video set to Dave Brubeck’s Take Five - Dave Szauder on Instagram (turn on the sound)
How “Star Trek” Helped Make Midcentury-Modern the Signature Sci-Fi Aesthetic - Marah Eakin, Dwell
”… the (now) retro-futuristic aesthetic Star Trek helped cultivate through its use of midcentury-modern designs still pops up in 21st-century sci-fi depictions, from Blade Runner: 2049 to episodes of Black Mirror. But while some sci-fi has veered toward postapocalyptic hellscapes and steely and/or dusty outposts built in the wake of economic, social, and climate-based collapse, the Star Trek universe has always been based on idealism, and maybe even a bit of luxury.”
Thought-provoking
Google’s ‘Nested Learning’ paradigm could solve AI’s memory and continual learning problem - Ben Dickson, VentureBeat, Nov. 21, 2025
”The problem is that today’s transformer-based LLMs have no mechanism for “online” consolidation. Information in the context window never updates the model’s long-term parameters — the weights stored in its feed-forward layers. As a result, the model can’t permanently acquire new knowledge or skills from interactions; anything it learns disappears as soon as the context window rolls over. Nested Learning (NL) is designed to allow computational models to learn from data using different levels of abstraction and time-scales, much like the brain. It treats a single machine learning model not as one continuous process, but as a system of interconnected learning problems that are optimized simultaneously at different speeds.”
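To make the “different speeds” idea a little more concrete, here is a toy two-timescale training loop. It is my own illustration of the general concept, not Google’s Nested Learning implementation: one set of weights updates on every example, while a second set consolidates toward it only every few steps, a bit like fast in-context adaptation layered over slower long-term memory.

```python
# Toy two-timescale learner (illustrative only, not Google's Nested Learning
# method). "Fast" weights update on every step; "slow" weights absorb part of
# the fast weights only every K steps, mimicking learning processes that run
# at different speeds within one model.
import numpy as np

rng = np.random.default_rng(1)
dim, steps, consolidate_every = 8, 200, 10       # hypothetical settings
lr_fast, lr_slow = 0.1, 0.5

true_w = rng.normal(size=dim)                    # target function to learn
fast_w = np.zeros(dim)                           # updated on every example
slow_w = np.zeros(dim)                           # updated on a slower schedule

for step in range(1, steps + 1):
    x = rng.normal(size=dim)
    y = true_w @ x
    pred = (fast_w + slow_w) @ x                 # both timescales contribute
    grad = (pred - y) * x                        # gradient of squared error
    fast_w -= lr_fast * grad                     # fast, per-step update
    if step % consolidate_every == 0:
        slow_w += lr_slow * fast_w               # periodic consolidation
        fast_w *= 1 - lr_slow                    # fast memory partially decays

print("mean error:", float(np.abs((fast_w + slow_w) - true_w).mean()))
```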
Anthropic Cyberattack Report Sparks Controversy - The Batch newsletter, Andrew Ng
”Independent security researchers interviewed by Ars Technica, The Guardian, and others found a variety of reasons to question the report. While they agreed that AI can accelerate tasks such as log analysis and reverse engineering, they have found that AI agents are not yet capable of performing multi-step tasks without human input, and they don’t automate cyberattacks significantly more effectively than hacking tools that have been available for decades.”
AI Blackmail: Fact-Checking a Misleading Narrative - Nirit Weiss-Blatt, Nov. 21, 2025
”CBS should issue a correction to its "60 Minutes" show on Anthropic.”
Six reasons to think there’s an AI bubble — and six reasons not to - Timothy B. Lee and Derek Thompson
”If you read this article, we think you’ll be prepared for just about every conversation about AI, whether you find yourself at a Bay Area gathering with accelerationists or a Thanksgiving debate with Luddite cousins. We think some of these arguments are compelling. We think others are less persuasive. So, throughout the article, we’ll explain both why each argument belongs in the discussion and why some arguments don’t prove as much as they claim. Read to the end, and you’ll see where each of us comes down on the debate.”
The AI dashboard: Watch boom turn to bubble (or not) in real time - Azeem Azhar, Oct. 20, 2025
”A month ago, we released our framework for assessing whether AI is a bubble. The framework uses five key gauges which measure various industry stressors and whether they are in a safe, cautious or danger zone. These zones have been back‑tested against several previous boom‑and‑bust cycles…. We are launching v1 of a live dashboard, updated in real time as new data comes in.”
Hyperproductivity: The Next Stage of AI? - Steve Newman, Second Thoughts, Nov. 19, 2025
”These teams have four things in common:
They are aggressively using AI to accelerate their work.
They aren’t just using off-the-shelf tools like ChatGPT or Claude Code. They’re building bespoke productivity tools customized for their personal workflows – and, of course, they’re using AI to do it.
Their focus has shifted from doing their jobs to optimizing their jobs. Each week, instead of delivering a new unit of work, they deliver a new improvement in productivity.
Their work builds on itself: they use their AI tools to improve their AI tools, and the work they’re optimizing includes the optimization process.”
Why it takes months to tell if new AI models are good - sean goedecke - Nov. 22, 2025
”Nobody knows how good a model is when it’s launched. Even the AI lab who built it are only guessing and hoping it’ll turn out to be effective for real-world use cases. Evals are mostly marketing tools. It’s hard to figure out how good the eval is, or if the model is being “taught to the test”. If you’re trying to judge models from their public evals you’re fighting against the billions of dollars of effort going into gaming the system. Vibe checks don’t test the kind of skills that are useful for real work, but testing a model by using it to do real work takes a lot of time. You can’t figure out if a brand new model is good that way. Because of all this, it’s very hard to tell if AI progress is stagnating or not. Are the models getting better? Are they any good right now? Compounding that problem, it’s hard to judge between two models that are both smarter than you (in a particular domain). If the models do keep getting better, we might expect it to feel like they’re plateauing, because once they get better than us we’ll stop seeing evidence of improvement.”
Fix the News, issue #320.
”Of course, this has been going on for a long time. William Randolph Hearst coined the phrase “if it bleeds, it leads” back in the 1890s, during the days of yellow journalism. What’s different now is that we all get our news on the black devil glass, all the time, as it happens, from everywhere - which is why we are drowning in an endless stream of alarm that makes the world feel far more dangerous than it actually is. We’ve said this a hundred times before, but it’s always worth repeating: the world’s biggest news organisations aren’t reflecting the world. They are reflecting what’s dramatic, rare, and cheap enough to turn into a story. If you’re after an accurate picture of what’s getting worse, or what’s improving, you’ll need to look somewhere other than the news.”
How to Confront Highbrow Misinformation - Dan Williams, Persuasion, Nov. 10, 2025
”Highbrow misinformation primarily misinforms audiences not through explicit falsehoods but through forms of communication that select, omit, frame, and contextualize information in misleading ways, signal-boosting some facts, de-emphasizing others, placing real statistics in deceptive contexts, and soliciting commentary from experts offering preferred opinions. Ruxandra Teslo calls this “Haut Bourgeois Propaganda,” which contrasts with the kind of “brute misinformation” (i.e., outright lies and fake news) often associated with lowbrow sources and alternative media.”
Scientists Need a Positive Vision for AI - Bruce Schneier, Nov. 5, 2025
”There are myriad ways to leverage and reshape AI to improve peoples’ lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples have arisen from the scientific community and deserve to be celebrated.” [followed by links to examples] “While each of these applications is nascent and surely imperfect, they all demonstrate that AI can be wielded to advance the public interest. Scientists should embrace, champion, and expand on such efforts.”
Writing for AIs is a good way to reach more humans - sean goedecke, Nov. 14, 2025
”I don’t write this blog to make money or to express myself in beautiful language. I write because I have specific things I want to say”…. “When engineers talk to language models about their work, I would like those models to be informed by my posts, either via web search or by inclusion in the training data. As models get better, I anticipate people using them more (for instance, via voice chat). That’s one reason why I’ve written so much this year: I want to get my foothold in the training data as early as possible, so my ideas can be better represented by language models long-term.” “I think this is a good reason to write more and to make your writing accessible (i.e. not behind a paywall). But I wouldn’t recommend changing your writing style to be more “AI-friendly”: we don’t have much reason to think that works, and if it makes your writing less appealing to humans it’s probably not a worthwhile tradeoff.”
The Internet Archive should be protected not attacked - Mathew Ingram, Nov. 13, 2025
”Cases like the Hachette lawsuit reinforce something I’ve often thought, which is that if public libraries didn’t already exist, corporate interests like book publishers and movie companies and record labels would never allow them to be created. People get to read our books or watch our movies or listen to our records and we don’t get paid anything? they would shout – that’s communism! Incidentally, it’s worth noting that plenty of authors’ groups and individual creators are fans of the Internet Archive, and many didn’t support the Hachette case. But cases like this are why Congress created fair use – because without it, commercial entities will press their interests relentlessly until every book and film and song and work of art is locked up in a giant vending machine where the price keeps going up, and your purchase is deleted if you don’t agree to its terms.”
Common Ground between AI 2027 & AI as Normal Technology - Kapoor et al., Asterisk Magazine, Nov. 12, 2025
”These are substantial disagreements, which have been partially hashed out here and here. Nevertheless, we’ve found that all of us have more in common than you might expect. In this essay, we’ve come together to discuss the ways in which we agree with each other on how AI progress is likely to proceed (or fail to proceed) over the next few years.”
Type 2 Growth - Kevin Kelly, Oct. 27, 2025
”In this respect "degrowthers" are correct in that there are limits to bulk growth — and running out of humans may be one of them. But they don’t seem to understand that evolutionary growth, which includes the expansion of intangibles such as freedom, wisdom, and complexity, doesn’t have similar limits. We can always figure out a way to improve things, even without using more stuff — especially without using more stuff! There is no limit to betterment. We can keep growing (type 2) indefinitely.”
Is it worrying that 95% of AI enterprise projects fail? - sean goedecke, Nov. 3, 2025
”The NANDA report is not as scary as it looks. The main reason is that ~95% of hard enterprise IT projects fail no matter what, so AI projects failing at that rate is nothing special. AI projects are all going to be on the hard end, because the technology is so new and there’s very little industry agreement on best practices. It’s also not clear to me that the 95% figure is trustworthy. Even taking it on its own terms, it’s mathematically closer to 92%, which doesn’t inspire confidence in the rest of the NANDA team’s interpretation. We’re forced to take it on trust, since we can’t see the underlying data - in particular, how many of those 52 interviews went into that 95% figure.”
Copyright
The Bartz v. Anthropic Settlement: Understanding America’s Largest Copyright Settlement - Dave Hansen, Kluwer Copyright Blog, Nov. 10, 2025
The best article I’ve seen about this case. Many interesting details. Worth reading the whole thing.
”Conclusion: Nobody really won in this suit. Authors and publishers get money but no control over future AI training. Anthropic writes a massive check, but it already won on fair use for training its LLM and it gets to keep the scans it makes of legally purchased books. The AI industry gets some guidance (don’t download your books from LibGen) but still faces uncertainty about outputs and future regulation.”
Setting the Record Straight: Common Crawl’s Commitment to Transparency, Fair Use, and the Public Good - Rich Skrenta, CommonCrawl, Nov. 4, 2025
It’s worth reading the whole thing.
”A recent article in The Atlantic (“The Nonprofit Doing the AI Industry’s Dirty Work,” November 4, 2025) makes several false and misleading claims about the Common Crawl Foundation, including the accusation that our organization has “lied to publishers” about our activities. This allegation is untrue. It misrepresents both how Common Crawl operates and the values that guide our work.”
Warner Music Group strikes ‘landmark’ deal with Suno; settles copyright lawsuit against AI music generator - Music Business Worldwide, Nov. 25, 2025
”Artists and songwriters, according to the companies, ‘will have full control over whether and how their names, images, likenesses, voices, and compositions are used in new AI-generated music’. In 2026, according to the press release, Suno will make ‘several changes to the platform, including launching new, more advanced and licensed models’.”
WMG settles Udio lawsuit, strikes licensing deal for ‘next-generation’ AI music platform coming in 2026 - Music Business Worldwide, Nov. 19, 2025.
”Warner Music Group and AI music platform Udio have struck what they call “a landmark agreement” that resolves the companies’ copyright infringement litigation. The companies have also entered into a licensing deal for a ‘next-generation’ AI–powered music creation, listening, and discovery platform set to launch next year. The news arrived just an hour after WMG announced a new partnership with Stability AI on Wednesday (November 19), which the companies say will ‘advance the use of responsible AI in music creation’.”
The Munich Mirage: Why the “GEMA vs. OpenAI” Verdict is Less About AI and More About Geography - Information Labs, Nov. 26, 2025
”This verdict was not a cosmic shifting of the legal tides. It was the predictable result of a uniquely German forum shopping practice known as “the flying jurisdiction” (fliegender Gerichtsstand)”… “This turns the German legal map into an “all-you-can-eat” buffet for plaintiffs. If you are a rights-holder looking to sue a tech giant, you don’t just file a lawsuit; you go shopping. You look for the specific court that offers the specific “product” you want.”
Suno, Yout, Perplexity AI and §1201: AI Training and another piece of the DMCA - Authors Alliance, Nov. 21, 2025
”Continued AI development shouldn’t hinge on a misapplied anti-circumvention rule. If copyright owners want to challenge AI training, they should do so directly on copyright infringement grounds or through Congress. Not through a legal side door designed to regulate encryption in 1998.”
The False Hope of Content Licensing at Internet Scale - Matthew Sag, ProMarket, Nov. 19, 2025
”Matthew Sag argues that content licensing deals between developers of artificial intelligence and content owners are only possible for large content owners and cannot feasibly apply to the bulk of producers and owners of content on the internet.”
Getty Images v Stability AI: A landmark High Court ruling on AI, copyright, and trade marks - Andres Guadamuz, Nov. 6, 2025
”The problem here, and one that keeps eluding quite a few non-legal commentators, is that the fact that a model CAN memorise items does not mean that a model contains copies of all training data, and this is very important from a legal perspective, and a detail that Smith J clearly understood…. This is a fantastic decision that will be the subject of endless discussion for months to come. … I would like to commend the judge on an extremely careful and judicious analysis, I’ve been following this case with interest since day one, and I’ve found her rulings astute, her questions direct and to the point, and she’s been capable of removing all of the superfluous nonsense from a very important subject.”
Climate
A short summary of my argument that using ChatGPT isn’t bad for the environment - Andy Masley, Nov. 12, 2025
”In the past few months I’ve spoken to a lot of people facing objections to using chatbots, including a surprising number of people who want to buy chatbot access for their large organizations and have been shot down because of worry over the environmental impact of individual prompts. I think it’s crazy that this is still happening and want a much shorter post readers can send to people who are still misinformed about this.”
Empire of AI is wildly misleading about AI water use - Andy Masley, Nov. 16, 2025
Debunking AI’s Environmental Panic | Andy Masley - Chain of Thought Podcast, Nov. 26, 2025
What you need to know about AI data centers - Epoch AI, Nov. 4, 2025
Best clear explanation I’ve seen.
”Realistically, AI hasn’t yet used enough power and water to have broad climate effects. While AI data centers collectively use 1% of total US power, this is still substantially less than air conditioning (12%) and lighting (8%). The story is similar for water. While US data centers directly used around 17.4 billion gallons of water in 2023, agriculture used closer to 36.5 trillion gallons — around 2,000× higher. Nevertheless, AI data centers could have significant local effects in regions with less energy availability. And in time, AI data centers might start contributing to climate change in a significant way, if they continue to rely on fossil fuels.”
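The “around 2,000×” figure in that quote is easy to check yourself, since it’s just the ratio of the two water numbers Epoch AI cites:

```python
# Quick arithmetic check of the water comparison quoted above.
data_centers_gal = 17.4e9     # US data centers' direct water use, 2023 (gallons)
agriculture_gal = 36.5e12     # US agricultural water use (gallons)
print(round(agriculture_gal / data_centers_gal))   # ~2098, i.e. "around 2,000x"
```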
The Future of AI and Energy - Sam Matey-Coste and Michael Spencer, The Weekly Anthropocene, March 3, 2025
”However when you look at the real-world data, AI’s energy demand surge is relatively tiny compared to the ongoing and exponentially accelerating transformation of humanity’s energy system due to cheap clean electricity generation. This is one of the most important events of the 21st century — and may someday be seen as even more important than AI — but it’s received relatively little coverage due to its comparatively workaday and undramatic nature. Solar power is now the cheapest electricity source in history, and it’s growing exponentially — kicking off what is already by far the fastest energy build-out in history.”
Can AI help redesign the technosphere—and so save the planet? - Clark Miller, Nov. 16, 2025
”But nature’s not the problem. We are. Or, more precisely, our relationships with technology. Maybe AI can help. We’ll only know if we point AI away from the Earth and, instead, toward the technosphere. It’s that other engineered planet—the one that humanity has built out of our individual and collective relationships with technology—that we need to learn more about here: how it works, how it might be redesigned to work more sustainably in the future, and how to get our techno-human selves from here to there.”
Q3 Global Power Report: No fossil fuel growth expected in 2025 - Ember Energy, Nov. 13, 2025
”No growth in fossil power is expected in 2025 as clean power growth meets all new demand. Record growth in solar power, combined with moderate wind growth, exceeded the rise in electricity demand in the first three quarters of 2025, even when accounting for a fall in hydropower. As a result of fast-rising clean power, fossil generation in the global power sector is expected to remain flat in 2025 for the first time since the Covid-19 pandemic.”
The future
AI skeptics and AI boosters are both wrong - Timothy B. Lee, Oct. 30, 2025
”In short, that 95% failure rate isn’t a sign that AI is useless. It’s a sign that companies are in the early phases of learning about a new technology. This kind of thinking is anathema to people at both ends of the AI debate. The AGI-pilled believe it’s only a matter of time before AI systems become so smart that they can become “drop-in remote workers.” At the opposite extreme, skeptics are reluctant to admit that the technology is useful at all.
But I think the evidence so far supports a middle interpretation: this technology has immense potential. But like previous general-purpose technologies, it’s going to take years, if not decades, to fully unlock it.”
AI Eats the World - Benedict Evans, November 2025
Slides from his latest presentation. “The future always takes time.”
Learn more
Attend my Dec. 5 webinar: AI’s Environmental Impact - Library 2.0 webinar, Dec. 5, 2025
AI Literacy for Library Workers: self-study course - Nicole Hennig
No deadlines.
The course will receive updates once a year – you’ll have perpetual access.
Go at your own pace, come back and review anytime.
You’ll get a certificate of completion when you finish the course.
Comments from previous participants
One of the best courses I’ve ever taken. Highly recommended.
– Brock Edmunds, Assistant Head, Pardee Library, Boston University Libraries
This course really gave me the opportunity to try out so many different AI tools and have that hands on experience I really needed. Plus get a better understanding of generative AI in general. Highly recommend the course!
– Laura Hogan, Reference & Instruction Librarian, Bristol Community College, Fall River, MA
This class offers a great overview of how AI works and some of the ethical issues involved. The assignments also gave me a chance for hands on learning. I only knew of AI as a peripheral thing existing in the world, and this class gave me a great basic understanding. Highly recommended.
—Deb Garbison, Librarian – Hennepin County Library
Being able to work effectively with AI is clearly an important aspect of librarianship now and Nicole’s course will introduce you to a vast range of AI tools with interesting activities to learn how to use them.
—Cynthia Rider, Public Service Librarian, Peninsula Library System, San Mateo, CA
I highly recommend this excellent course. The videos are informative, exciting and short, and I really enjoyed the hands-on activities, it was so much fun that I could try out all these new tools! I liked best the attitude that we were supposed to understand how easy it is to use AI to build a chatbot, generate music or create a podcast from a text.
—Angelika Gulyas, Senior Collection Management Librarian, Central European University, Vienna
And as always, you can follow me on Bluesky or Mastodon where I post daily about generative AI.

