July 7, 2025
Hi everyone!
If you’re new here, welcome! There has been a lot of news to keep up with recently. So let’s jump right into it.
In this issue we’ll look at:
Deep Background GPT Released: The world's best AI fact-checking tool is now available to all as a completely free GPT - Mike Caulfield (this is very cool!)
➡️ I now have a self-study version of my AI Literacy course on Udemy, so that may be of interest if you prefer to have no deadlines and want ongoing access to the course. (on sale until Aug. 5 for $44.99, usually $99)
AI music video: The World’s Got A Groove - Kelly Boesch AI Art on YouTube (I love what she’s doing with AI video — see more on her channel).
10 AI Video Trends Taking Over the Internet - The AI Daily Brief, YouTube (fun overview)
AI Fluency: The AI Fluency Framework - Anthropic (useful free course)
Does using ChatGPT change your brain activity? Study sparks debate - Nicola Jones, Nature (oh boy, was there a lot of debate!)
And the usual tips, accessibility stories, what’s happening in education and libraries, some “just for fun” items, thought-provoking stories, and stories about the future.
Enjoy!
Imaginary cat person: “Time bends in my presence. What you call future is but a ripple in the eternal sea. You stand at the edge of revelation. Do you dare step forward?”
Image: Midjourney, animation: Hedra
Foundation Models
Anthropic
Use artifacts to visualize and create AI apps, without ever writing a line of code - Anthropic
”(Note that the artifacts tab is available on free, pro, and max plans, and currently does not exist on mobile). Creating your first artifact: The best way to create an artifact is to describe a problem you want to solve, then let Claude help you refine it. Here's a step-by-step approach.” See also, Anthropic just made every Claude user a no-code app developer.
OpenAI
Sam Altman Says GPT-5 Coming This Summer, Open to Ads on ChatGPT—With a Catch - AdWeek
”OpenAI CEO Sam Altman announced on a new company podcast today that GPT-5 is expected to launch this summer, marking the next major leap in the company’s generative AI capabilities. However, he did not disclose a specific date.”
OpenAI launches o3-pro AI model, offering increased reliability and tool use for enterprises — while sacrificing speed - Emilia David, VentureBeat
”Just hours after announcing a big price cut for its o3 reasoning model, OpenAI made o3-pro, an even more powerful version, available to developers. o3-pro is “designed to think longer and provide the most reliable responses,” and has access to many more software tool integrations than its predecessor, making it potentially appealing to enterprises and developers searching for high levels of detail and accuracy.” (It’s expensive, though).
Updates to Advanced Voice Mode for paid users - ChatGPT release notes
”We're upgrading Advanced Voice in ChatGPT for paid users with significant enhancements in intonation and naturalness, making interactions feel more fluid and human-like. When we first launched Advanced Voice, it represented a leap forward in AI speech—now, it speaks even more naturally, with subtler intonation, realistic cadence (including pauses and emphases), and more on-point expressiveness for certain emotions including empathy, sarcasm, and more.”
Google
Forget about AI costs: Google just changed the game with open-source Gemini CLI that will be free for most developers - Sean Michael Kerner, VentureBeat
”Today Google announced its open-source Gemini-CLI that brings natural language command execution directly to developer terminals. Beyond natural language, it brings the power of Google’s Gemini Pro 2.5 — and it does it mostly for free. The free tier provides 60 model requests per minute and 1,000 requests per day at no charge, limits that Google deliberately set above typical developer usage patterns. Google first measured its own developers’ usage patterns, then doubled that number to set the 1,000 limit.”
Google launches production-ready Gemini 2.5 AI models to challenge OpenAI’s enterprise dominance - Michael Nuñez, VentureBeat
”The Alphabet subsidiary promoted two of its flagship AI models—Gemini 2.5 Pro and Gemini 2.5 Flash—from experimental preview status to general availability, signaling the company’s confidence that the technology can handle mission-critical business applications.”
Microsoft
Is Microsoft’s new Mu for you? - Mike Elgan, Computerworld
”By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
The industry is moving in this direction for obvious reasons. SLMs are now powerful enough to handle focused tasks on par with larger cloud-based models. They are cheaper to run, use less energy, and can be tailored for specific jobs or languages.” (SLMs = small language models)
Other models
MiniMax-M1 is a new open source model with 1 MILLION TOKEN context and new, hyper efficient reinforcement learning - Carl Franzen, VentureBeat
”Chinese AI startup MiniMax, perhaps best known in the West for its hit realistic AI video model Hailuo, has released its latest large language model, MiniMax-M1 — and in great news for enterprises and developers, it’s completely open source under an Apache 2.0 license, meaning businesses can take it and use it for commercial applications and modify it to their liking without restriction or payment.”
Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm - Jae Lee, VentureBeat
”With OpenAI reportedly spending $7 to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.”
Magistral - by Mistral AI
“Announcing Magistral — the first reasoning model by Mistral AI — excelling in domain-specific, transparent, and multilingual reasoning.”
Agents
What I learned trying seven coding agents - Timothy B. Lee
”It’s impossible to truly understand a tool if you haven’t used it yourself. So last week I decided to put seven popular coding agents to the test.
I found that right now, these tools have significant limitations. But unlike the computer-use agents I wrote about yesterday, I do not view these coding agents as a dead end. They are already powerful enough to make programmers significantly more productive. And they will only get better in the coming months and years.”
Images, video, voices, and music
10 AI Video Trends Taking Over the Internet - The AI Daily Brief, YouTube
Interesting overview of what’s happening with AI video since Veo 3. (21 minutes)
‘Surpassing all my expectations’: Midjourney releases first AI video model amid Disney, Universal lawsuit - Carl Franzen, VentureBeat
”Popular AI image generation service Midjourney has launched its first AI video generation model V1, marking a pivotal shift for the company from image generation toward full multimedia content creation.”
Midjourney Isn’t the Most Accurate AI—That’s Why It’s the Best - Lucas Crespo, Every
“After I found some images I liked, I noticed a button that said “copy prompt.” Being able to read the prompt of an image is like trying a dish and seeing the recipe. Once you learn it, you can play with it and make it your own. I began to bounce ideas off of Midjourney’s user base. I wrote prompts about Every's values and vision for the future, and found myself in a rabbit hole of classical art, vintage illustration styles, Greco-Roman architecture, and eventually Dadaism, pop art, and postmodernism.”
AI art is better off bad: Reclaiming the beautiful error - Harry Law, Learning from Examples
”The most honest AI art puts artificiality to work. Images that look machine-generated but use aesthetic distance to reveal truths about its subject, or texts that sound alien but illuminate aspects of language we take for granted. Glitch artists have been doing something like this for a long time. What they did with early digital systems, we can do with transformers, diffusion models or generative adversarial networks. The key is carefully orchestrated failure that reveals patterns invisible to conventional seeing.”
Eleven v3 Audio Tags: Precision delivery control for AI speech - Ryan Morrison, ElevenLabs
”Using tags like [pause], [rushed], [stammers], or [drawn out], you can adjust how each sentence lands — not just emotionally, but rhythmically. That control turns flat delivery into performance.”
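To make the tag idea concrete, here is a tiny, hypothetical script marked up the way the article describes. The tag names come from the article; the dialogue and the workflow around it are my own illustration, not ElevenLabs documentation.

```python
# Illustrative only: a short script marked up with Eleven v3-style audio tags.
# The tags ([pause], [rushed], [stammers], [drawn out]) are the ones named in
# the article; everything else here is made up for this example.

lines = [
    "[pause] Welcome back to the library podcast.",
    "[rushed] We have a lot to cover today, so let's get moving.",
    "I, [stammers] I honestly didn't expect the demo to work.",
    "[drawn out] But it did.",
]

script = "\n".join(lines)
print(script)  # you would paste or send this as the text of a v3 speech request
```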
Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation - Shenzhen Campus of Sun Yat-sen University, Meituan, and HKUST
”Audio-driven human animation methods, such as talking head and talking body generation, have made remarkable progress in generating synchronized facial movements and appealing visual quality videos. However, existing methods primarily focus on single human animation and struggle with multi-stream audio inputs, facing incorrect binding problems between audio and persons. Additionally, they exhibit limitations in instruction-following capabilities. To solve this problem, in this paper, we propose a novel task: Multi-Person Conversational Video Generation, and introduce a new framework, MultiTalk, to address the challenges during multi-person generation.”
Too Much, Too Little, Never Just Right? The Labelling Dilemma of Article 50 of the EU AI Act - Information Labs on LinkedIn
(People are getting “warning fatigue” when it comes to labels on AI generated content).
”The evidence base is clear: current approaches to AI transparency are failing to achieve their stated goals. Warning labels don't fundamentally change behavior. Universal labeling requirements ignore sectoral differences and cultural contexts. And technical watermarking solutions are still evolving. This doesn't mean abandoning transparency, quite the opposite. It means designing evidence-based systems that account for human psychology, technological limitations, and real-world implementation challenges.”
‘Spokesperson’ for AI ‘Band’ Velvet Sundown Now Says He Is an Imposter - David Browne, Rolling Stone
For the full story, read I am Andrew Frelon, the guy running the fake Velvet Sundown Twitter. It just keeps getting funnier.
Tips
Using AI Right Now: A Quick Guide: Which AIs to use, and how to use them - Ethan Mollick
AI Fluency: The AI Fluency Framework - Anthropic
Useful, free course. I finished it and enjoyed it.
Prompting Techniques for Specialized LLMs - AI for Education
The CORRECT way to use ChatGPT (in 2025) - Jeff Su, YouTube
(He talks very fast, but these are good tips.)
I Fed My Voice to 10 Free AI Voice Cloners. Only One Nailed It. - Daniel Nest, Why Try AI
NotebookLM is adding a new way to share your own notebooks publicly - Google Labs
Deep Background GPT Released: The world's best AI fact-checking tool is now available to all as a completely free GPT - Mike Caulfield
Try this excellent GPT from Caulfield, the author of the SIFT method for information literacy. Here’s a live demo of his tool used on Claude - the same prompt powers it.
How to Fix Your Context - dbreunig.com
My offerings
I’ve recently updated our five generative AI tutorials for the University of Arizona Libraries. Feel free to try them and refer others to them.
The next session of my online course, AI Literacy for Library Workers, will begin on October 7. The registration link from Infopeople isn’t ready yet, but I’ll add it to that page when it is. You can also sign up to be notified of future sessions. I plan to make some major updates to the course, so feel free to take it again if you like (others have). Here’s what previous participants said about the course.
➡️ I now have a self-study version of the course on Udemy, so that may be of interest if you prefer to have no deadlines and want ongoing access to the course.
(on sale until Aug. 5 for $44.99, usually $99)
Accessibility
Google Adds Captions to Gemini Live Conversations - Steven Aquino, Curb Cuts
“When you launch Gemini Live on Android or iOS, a rectangular captions button appears in the top-right corner. Tapping will enable a floating box that provides a transcript of Gemini’s responses. (This does not show what you’re saying in real-time, but that remains available in the full text transcript after ending the conversation.),” Li wrote. “It appears near the middle of the fullscreen interface in audio mode, and at the top when video streaming is enabled. These three lines of text cannot be moved or resized. In Gemini > Settings, there’s a new ‘Caption preferences’ item underneath the ‘Interrupt Live responses’ on/off toggle that links to system settings on Android.”
Neurodivergent Users Are Making AI Better for Everyone - Michael Spencer and Natalia Cote-Munoz
”The unexpected lessons from AI's most innovative user community—and what they teach us about inclusive design.”
Creating Alt Text for Digital Collections with AI: A Case Study - Peter Broadwell, Lindsay King, Choice
”Furthermore, the potential of implementing vision language models locally extends beyond alt text generation to allow better discovery across library digital collections. Image collections with less metadata also would benefit from vision-language models’ ability to project (or “embed”) visual features of images into a mathematical representation of language, enabling text-based searching across the collection in multiple languages.”
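The “embedding” idea in that last sentence is easy to sketch. Here is a minimal, hypothetical example using a CLIP-style model from the sentence-transformers library (not the tooling used in the case study); the image filenames are placeholders.

```python
# A minimal sketch of cross-modal search: images and a text query are both
# projected into the same embedding space, so plain-language queries can rank
# images that have little or no metadata.
# Assumes: pip install sentence-transformers pillow, plus your own image files.
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer("clip-ViT-B-32")  # a CLIP-style vision-language model

image_paths = ["scan_001.jpg", "scan_002.jpg"]            # placeholder filenames
image_embeddings = model.encode([Image.open(p) for p in image_paths])

query = "students reading in a large hall"                # plain-language query
query_embedding = model.encode(query)

scores = util.cos_sim(query_embedding, image_embeddings)[0].tolist()
for path, score in sorted(zip(image_paths, scores), key=lambda x: -x[1]):
    print(f"{path}: {score:.3f}")                         # higher = better match
```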
These Transcribing Eyeglasses Put Subtitles on the World - Boone Ashworth, Wired
”TranscribeGlass are smart eyeglasses that aim to do exactly what it says on the tin: transcribe spoken conversations and project subtitles onto the glass in front of your eyes. They’re meant for the Deaf and, primarily, the hard-of-hearing community who struggle to read lips or pick out a conversation in a loud room.”
What’s happening in education
Tackling AI Through Play: How the Pedagogy of Play can help us learn about AI - Jaime Chao Mignano
”Playing with AI was rewarding. As an educator, I was even more excited that it was meaningful. It drew me into more careful attention to the interactions I was having with AI, making connections and considering opportunities, and offered itself up for analysis as I built explanations on how it was “deciding”. I was chewing on all of this information and I was motivated to keep experimenting and iterating. Playful learning is a strong entry point to tackle learning about AI “…because the learners care, [so] the result is often deep learning (A Pedagogy of Play).”
Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education - European Commission, OECD, and Code.org
”This framework is designed for teachers, education leaders, education policymakers, and learning designers. It outlines competences and learning scenarios to inform learning materials, standards, school-wide initiatives, and responsible AI policies for primary and secondary education settings.”
The Butler vs The Sparring Partner: Reframing the AI Relationship for Students - Mike Kentz
Butler vs sparring partner: ”Butler Approach: "ChatGPT, write me a 500-word essay about climate change." Sparring Partner Approach: "I'm arguing that climate policies should prioritize adaptation over mitigation. Help me identify the strongest counterarguments to this position so I can address them effectively." “
What’s happening in libraries
Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability - Matteo Cargnelutti, et al.
” … this technical report introduces Institutional Books 1.0, a large collection of public domain books originally digitized through Harvard Library's participation in the Google Books project, beginning in 2006. Working with Harvard Library, we extracted, analyzed, and processed these volumes into an extensively-documented dataset of historic texts. This analysis covers the entirety of Harvard Library's collection scanned as part of that project, originally spanning 1,075,899 volumes written in over 250 different languages for a total of approximately 250 billion tokens.”
How One College Library Plans to Cut Through the AI Hype - Kathryn Palmer, Inside Higher Ed
”“The library is taking the lead to be the interdisciplinary hub,” he said, adding that centralizing such services at the library can help avoid the silos that department-specific AI guidelines often create. “Libraries are for everyone; anybody should feel like they can ask us for help and consultation on AI.”
To make that happen, Stony Brook’s library—which helped launch the university’s AI Innovation Institute earlier this year—is building a team of information professionals focused on helping students and faculty navigate the practical and ethical considerations of using AI.”
New book: Generative AI and Libraries: Claiming Our Place in the Center of a Shared Future - by Michael Hanegan and Chris Rosser, American Library Association
I like how they frame generative AI as an “arrival technology.”
”An arrival technology fundamentally reshapes society regardless of individual choice or adoption. Unlike elective technologies—such as smartphones, which we can choose to use or not—arrival technologies transform the underlying fabric of how society functions.” They also say that “there is effectively no opt-out strategy that is sustainable or effective.” I look forward to reading the whole book. Download the first chapter for free.
Navigating Artificial Intelligence for Cultural Heritage Organisations - Open access book from UCL Press
”Navigating Artificial Intelligence for Cultural Heritage Organisations explores the innovative technologies and approaches to digitised and born-digital records within libraries and archives across the UK and US, and beyond. It brings together chapters from experts across the fields of digital humanities, computer science and information science, alongside professionals within the library and archival sector.”
Librarian Leadership on the AI Frontier: Whitepapers - Technology from Sage
”Over half of students report using AI tools like ChatGPT in their research, but just 8% said their librarians supported them in their use of AI.
Despite this, student trust in librarians remains high, with more than half saying they’d feel more confident using AI tools if recommended by their librarian.”
Just for fun
Using Runway character reference feature to place yourself in historical imagery and then animating them - Peter Gostev on LinkedIn
Cats doing high dives and dogs doing high dives and all sorts of things doing high dives
"We Are Not PROMPTS!" - Voesis Studios on YouTube
AI music video: The World’s Got A Groove - Kelly Boesch AI Art on YouTube
The Forgotten War on the Walkman - Louis Anslow, Pessimists Archive
My video doorbell has had a busy day - Jan Loos on Facebook
Thought-provoking
New research & techniques
Beyond static AI: MIT’s new framework lets models teach themselves - Ben Dickson, VentureBeat
”The researchers’ solution is SEAL, short for Self-Adapting Language Models. It uses a reinforcement learning (RL) algorithm to train an LLM to generate “self-edits”—natural-language instructions that specify how the model should update its own weights. These self-edits can restructure new information, create synthetic training examples, or even define the technical parameters for the learning process itself.
Intuitively, SEAL teaches a model how to create its own personalized study guide. Instead of just reading a new document (the raw data), the model learns to rewrite and reformat that information into a style it can more easily absorb and internalize. This process brings together several key areas of AI research, including synthetic data generation, reinforcement learning and test-time training (TTT).”
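For readers who like to see the shape of such a loop, here is a deliberately toy sketch of the idea as the article describes it: generate a self-edit, apply it as an update, score the result, and keep what helped. Every function body is a hypothetical stand-in; this is not the SEAL code.

```python
# Toy sketch of a SEAL-style loop (generate self-edit -> update -> evaluate ->
# reinforce). All functions below are hypothetical stubs, not MIT's code.
import random

def generate_self_edit(model, document):
    # The real system asks the LLM to rewrite the document into study material.
    return f"Restate the key facts of: {document}"

def apply_self_edit(model, self_edit):
    # The real system turns the self-edit into a small weight update (fine-tune).
    return {**model, "updates": model["updates"] + 1}

def evaluate(model, questions):
    # The real system scores the updated model on downstream questions.
    return random.random()  # stand-in reward

model = {"name": "toy-lm", "updates": 0}
document = "New interlibrary loan rules take effect in 2026."
questions = ["When do the new rules take effect?"]

best_reward = float("-inf")
for _ in range(3):                         # tiny RL outer loop
    edit = generate_self_edit(model, document)
    candidate = apply_self_edit(model, edit)
    reward = evaluate(candidate, questions)
    if reward > best_reward:               # keep self-edits that improved performance
        best_reward, model = reward, candidate

print(model, round(best_reward, 3))
```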
Surveys
2025: The State of Consumer AI - Menlo Ventures
”While Gen Z (ages 18-28) leads overall AI adoption as expected, Millennials (ages 29-44) emerge as power users, reporting more daily usage—flipping the typical “younger = higher usage” pattern. But AI is not just a young person’s game: Nearly half (45%) of Baby Boomers (ages 61-79) have used AI in the past six months, with 11% using it daily.”
Copyright
Why Cloudflare’s Pay Per Crawl Is a Trap for 99% of Websites - Bill Hartzer
At first read, this method from Cloudflare sounds promising, but it seems there are many problems with it. See also The Cloudflare Trap. I think the Creative Commons approach is better; see the following article.
Introducing CC Signals: A New Social Contract for the Age of AI - Creative Commons
”Creative Commons (CC) today announces the public kickoff of the CC signals project, a new preference signals framework designed to increase reciprocity and sustain a creative commons in the age of AI.” You can give feedback here.
Perplexity Rejects BBC’s Legal Claims Over AI-Driven News Content Reuse - Pymnts
”The FT reported that it dismissed the BBC’s claims as “manipulative and opportunistic,” asserting that the broadcaster fundamentally misunderstands technology, the internet and intellectual property law. The company maintains that it does not build or train foundational models but rather provides an interface for users to access models from OpenAI, Google and Anthropic, with its own system based on Meta’s Llama and refined for accuracy.”
AI Co. Anthropic Nabs Partial Fair Use Win in Copyright Case - The Fashion Law
”Training LLMs = Fair Use; Scanning Purchased Books = Fair Use; Pirated Copies and Indefinite Retention = Not Transformative”
Anthropic’s multi-billion dollar loss in Bartz v. Anthropic is really a win (for AI) - Matthew Sag
”And yet, while Judge Alsup condemned the use of pirated books, he also embraced a view long promoted by the tech industry: that the intermediate copies generated in the course of training large language models are highly transformative and almost invariably fair use.”
Meta wins on fair use for now, but court leaves door open for “market dilution” - Dave Hansen and Yuanxiao Xu, Authors Alliance
”Compared with the speculative and unprecedented arguments advanced in this ruling, we find Judge Alsup’s decision offers a more grounded and sensible approach to fair use. Going forward, we hope more courts would follow Judge Alsup’s example and maintain a realistic assessment of transformativeness and market harm. Fair use should be flexible, but not so much that it can accommodate every speculative theory of harm.”
Getty drops key copyright claims against Stability AI, but UK lawsuit continues - Rebecca Bellan, TechCrunch
”The move doesn’t end the case entirely — Getty is still pursuing other claims as well as a separate lawsuit in the U.S. — but it underscores the gray areas surrounding the future of content ownership and usage in the age of generative AI.”
Disney and Universal sue Midjourney for copyright infringement - Andres Guadamuz, TechnoLlama
”Anyone who thinks that this lawsuit will kill generative AI is sorely mistaken, Disney does not want to destroy AI, it wants to use it. … So my theory is that this is sending a message to all of the other AI image generators and AI developers to be careful, stay in line, mind your output filters, or you could be next. Midjourney is small, it is expendable in the grand scheme of things. Disney may want to work with OpenAI, Google, Microsoft, and Meta in the future, so no need to start attacking potential partners, particularly as they have made very clear that they will be using generative AI themselves.”
AI copyright anxiety will hold back creativity - Nitin Nohria, MIT Technology Review
”Our copyright system has never required total originality. It demands meaningful human input. That standard should apply in the age of AI as well. When people thoughtfully engage with these models—choosing prompts, curating inputs, shaping the results—they are creating. The medium has changed, but the impulse remains the same: to build something new from the materials we inherit.”
Climate
From Shrugging Face to Sustainable AI: In Defense of Pragmatism - Josh Gellers, Medium
”Fourth, we could accept that AI is becoming woven into the fabric of our existence and still try to influence its trajectory to better align with our values and aspirations. As you may have suspected, I am a firm believer in the promise of option #4. But what kinds of policies might this perspective entail? The focus would shift from debating the technical minutia regarding the emissions generated directly by model development and use to putting our energy into greening AI infrastructure. In brief, we need to build greener buildings (i.e., data centers) and power them using renewable energy.”
Computing is efficient - Andy Masley
”First, data centers are specifically designed to be the most energy efficient way of storing and transferring information for the internet. If we got rid of data centers and stored the internet’s information on servers in homes and office buildings, we’d need to use a lot more energy."
“Every single action online, including accessing websites, streaming videos, and using online applications, ultimately involves traversing information through data centers. Data centers store basically all the content of the internet, process it, and control where it gets sent. If the internet can be said to exist in physical places, it’s almost entirely in data centers.
What percentage of our electricity grid would you expect data centers to take up given that Americans spend half their waking lives online? 50%? 25%? The answer turns out to be 4.4%”
How much energy are you using on AI? - Mirko Lorenz, DataWrapper
”In practical terms, the energy requirements of ChatGPT use are negligible compared to other household appliances. And this applies to your overall personal footprint as well: If you want to reduce your energy usage there are many other lifestyle changes that have a bigger effect, like going vegan, avoiding flights, or using a bike instead of car.”
The Biggest Statistic About AI Water Use Is A Lie - SE Gyges
”This claim is a lie. It ran in The Washington Post, in an article provocatively titled “A bottle of water per email: the hidden environmental costs of using AI chatbots”. ChatGPT almost certainly does not consume a bottle of water when writing one email and never has.”
From Fix the News (a good newsletter)
”Meanwhile, Trump may accidentally be accelerating the clean energy transition. The administration's 'energy dominance' agenda, aimed at boosting fossil fuel exports, is having an unexpected effect: making other countries nervous about oil dependency and pushing them faster toward renewables. In trying to stall the energy transition, he may be speeding it up.”
Rewiring the energy debate - Daan Walter, Sam Butler-Sloss, and Kingsmill Bond
“But this fashionable pessimism misses the bigger picture. Amid all the noise, one thing is clear: new energy tech is growing consistently, and with increasing impact. Today, the world invests twice as much in new energy technologies as it does in fossil fuels. In 2024 alone, the increase in solar generation matched the entire electricity demand of Germany.”
Explanations
Context engineering - Simon Willison
”The term context engineering has recently started to gain traction as a better alternative to prompt engineering. I like it. I think this one may have sticking power.”
Reinforcement learning, explained with a minimum of math and jargon - Timothy B. Lee
Excellent, clear explainer.
Software is changing (again) - YouTube
It’s worth watching this keynote. Andrej Karpathy introduces the idea that software is undergoing a fundamental change, identifying three stages: Software 1.0 (traditional code), Software 2.0 (neural networks), and the newest stage, Software 3.0 (large language models). (Summary by Gemini - scroll to bottom)
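As a rough illustration of what those stages mean in practice (my example, not one from the talk), the same small task can be written as explicit rules or as a prompt:

```python
# Illustrative contrast, not taken from the keynote: the same task expressed as
# Software 1.0 (hand-written rules) and Software 3.0 (an English prompt that an
# LLM would execute). Software 2.0 would be a trained classifier in between.

def sentiment_1_0(review: str) -> str:
    """Software 1.0: a programmer writes the rules explicitly."""
    negative_words = {"slow", "broken", "boring", "awful"}
    return "negative" if any(w in review.lower() for w in negative_words) else "positive"

def sentiment_3_0_prompt(review: str) -> str:
    """Software 3.0: the 'program' is a natural-language instruction for an LLM."""
    return f"Classify the sentiment of this review as positive or negative:\n{review}"

review = "The new catalog search is slow and the interface feels broken."
print(sentiment_1_0(review))           # rule-based answer
print(sentiment_3_0_prompt(review))    # the prompt you would send to a model
```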
The most important graph from the Digital News Report, and not repeating the mistakes of 2003 - Thomas Baekdal
”This is very much the problem we see. The news today doesn’t feel useful. Instead, it feels like a never ending form of doom-scrolling. And the worst part is that many newspapers are optimizing for even more doom-scrolling by turning things that people can’t do anything about into live streams that they suddenly have to watch ‘every minute’.” …. and …..“The debate about what AIs should be allowed to do continues to cause havoc in our industry, with publishers going in every direction at the same time. But, I think there is a debate here that we are missing. When it comes to how AIs work, we have a tendency to just put everything in the same box, but, when you think about it, there is a huge difference between the functions: train, learn, use.”
How To Think About AI: A Guide For The Perplexed, by Richard Susskind
I’m enjoying this excellent new book, which takes a balanced view. See this review.
Responses to AI critics
Does using ChatGPT change your brain activity? Study sparks debate - Nicola Jones, Nature
”Scientists warn against reading too much into a small experiment generating a lot of buzz.”
See thoughts from the authors after this study generated so many headlines.
“Is it safe to say that LLMs are, in essence, making us "dumber"?
No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming" , "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.”
Prompting is Managing: No, LLMs aren't creating “Cognitive Debt” - Venkatesh Rao
”The experiment clocks students as if writing were artisanal wood-carving—every stroke hand-tooled, originality king, neural wattage loud. Yet half the modern knowledge economy runs on a different loop entirely: delegate → monitor → integrate → ship. Professors do it with grad students, PMs with dev teams, editors with freelancers. Neuroscience calls that stance supervisory control. When you switch from doer to overseer, brain rhythms flatten, attention comes in bursts, and sameness is often a feature, not decay.”
Is this your brain on ChatGPT? - Sean Goedecke
”If LLMs mean your brain has to work less hard to pump out an essay response to a milquetoast SAT prompt, that doesn’t mean your brain is working less hard overall. It means your brain can work on other things!”
The Thinking Person's Guide to Not Thinking About AI - Carlo Iacono
(Harsh, but true).
”Every new piece of AI research lands in our collective consciousness not as data to be understood but as ammunition for a culture war that was decided before it began. We are watching the most sophisticated act of intellectual theatre in recent memory, where every player knows their role, every line is rehearsed and the conclusion was written before the curtain rose. The pattern is so predictable it borders on parody. A technical paper emerges from some prestigious institution. Its title, invariably crafted for maximum impact, promises to reveal what we always suspected: that artificial intelligence is either a fraud, a danger, or both. Within hours, the usual suspects emerge from their digital lairs, brandishing the abstract like a trophy. The headlines write themselves. The op-eds flow like water. The social media victory laps begin before anyone has read past page three.”
Why the "Everything is Hallucination" Argument Makes Me Roll My Eyes - Mike Caulfield
”But saying there is no distinction between a hallucination and normal output makes the same error as those who believe that everything that comes from the model is truth. It confuses distinctions in process with distinctions about the relation of the model output to the world outside it, and ignores that hallucinations — like “radar clutter” — are not random and tend to occur in particular situations that we can be aware of and mitigate through either better technology or better education. The people who address these issues will be the ones who study the unique features of hallucinatory and non-hallucinatory system function, not people who argue the entire distinction is irrelevant.”
A Shill for AI...Responding to criticism in the moment and after the fact. - Lance Eaton, AI + Education = Simplified
”Recently, at the conclusion of a talk, someone started off the Q&A referring to me as a “shill for AI.” It was a rather interesting moment in my public speaking experience, where the critique came at me personally rather than the technology. This post is about my reaction in the moment, what I would have liked to have said, and why it matters how we’re navigating GenAI in the discourse.”
The Apple "Reasoning Collapse" Paper Is Even Dumber Than You Think - Mike Caulfield
Do reasoning AI models really ‘think’ or not? Apple research sparks lively debate, response - Carl Franzen, VentureBeat
Apple's 'AI Can't Reason' Claim Seen By 13M+, What You Need to Know - AI Explained, YouTube
Good analysis from one of my favorite channels.
Thoughts on authorship
Authorship in the Age of AI: Why “writing in my voice” is still my writing - Carlo Iacono
”A well‑trained model can mirror a writer’s cadence so accurately that the result feels more Carlo than a rushed 11 p.m. email. Like a studio microphone, the tool captures texture the naked ear might miss. What some fear as flattening uniformity frequently becomes heightened distinctiveness.
When routine drafting takes minutes instead of hours, creators reinvest the saved bandwidth in higher‑order labour: refining nuance, stress‑testing evidence, or simply reflecting longer before hitting “send”. Far from outsourcing sincerity, AI augments the very conditions under which sincerity flourishes. Typewriters were “soulless”. Word‑processors “made prose too easy”. Each wave of contention subsided as norms adjusted. Today’s concern about AI‑mediated text echoes earlier moral panics but underestimates society’s proven ability to recalibrate trust cues.”
AI Signals The Death Of The Author - David J. Gunkel, NOEMA
”From one perspective — a perspective that remains bound to the usual ways of thinking — this can only be seen as a threat and crisis, for it challenges our very understanding of what writing is, the state of literature and the meaning of truth or the means of speaking the truth. But from another, it is an opportunity to think beyond the limitations of Western metaphysics and its hegemony. LLMs do not threaten writing, the figure of the author, or the concept of truth. They only threaten a particular and limited understanding of what these ideas represent — one that is itself not some naturally occurring phenomenon but the product of a particular culture and philosophical tradition.”
Beneficial uses of AI
Don’t Blindly Blame the Bots: How AI Is Helping Journalism - Information Labs on LinkedIn (several interesting use cases)
”The case of iTromsø, a small Norwegian newsroom, and its AI tool "Djinn" is particularly insightful. "Djinn" is an AI-powered data journalism interface designed to enhance newsgathering, analysis, and story summarisation. Crucially, iTromsø journalists participated in training the AI, leading to high user adoption because the system was "well-explainable and built around users". Journalist and data scientist Nikita Roy lauded Djinn as "one of the best use cases of AI deployment in the service of journalism," highlighting its ability to help smaller newsrooms uncover original stories despite scarce resources.”
Machine Readable: Language models are opening new avenues for inquiry in historical research and writing - Steven Johnson
”There's a general point worth making here, … all the key ideas that are being generated are either coming from me, or from the human-authored works that I have collected. NotebookLM is effectively functioning as a conduit between my knowledge/creativity and the knowledge stored in the source material: stress-testing speculative ideas I have, fact-checking, helping me see patterns in the material, reminding me of things that I read but have forgotten. It's an important point to make in part because there seems to be a growing concern about offloading the actual research/reading phase to AI-generated summaries.”
Merging AI and underwater photography to reveal hidden ocean worlds - MIT News
”The models can both generate new, synthetic, but scientifically accurate images unconditionally (i.e., requiring no user input/guidance), and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.”
Can AI speak the language Japan tried to kill? - Jessie Lau, BBC
“I hope this kind of AI can help people in Hokkaido, Ainu ancestors or young people, to learn the Ainu language," says Kawahara. He suggests that the technology could enable virtual avatars – Ainu teaching assistants that guide young learners of the language. Kawahara's team also hopes to capture more Ainu dialects with AI and include content from younger generations, not just old recordings, he says.”
AlphaGenome: AI for better understanding the genome - Ziga Avsec and Natasha Latysheva, Google DeepMind
”AlphaGenome’s generality allows scientists to simultaneously explore a variant's impact on a number of modalities with a single API call. This means that scientists can generate and test hypotheses more rapidly, without having to use multiple models to investigate different modalities.”
New AI Transforms Radiology with Speed, Accuracy Never Seen Before - Ben Schamisso, Northwestern University
”In a major clinical study, the tool boosted productivity by up to 40 percent without compromising accuracy.”
World first: brain implant lets man speak with expression — and sing - Miryam Naddaf, Nature
”A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly.”
Mayo Clinic’s AI tool identifies 9 dementia types, including Alzheimer’s, with one scan - Susan Murphy, Mayo Clinic
”The tool, StateViewer, helped researchers identify the dementia type in 88% of cases, according to research published online on June 27, 2025, in Neurology, the medical journal of the American Academy of Neurology. It also enabled clinicians to interpret brain scans nearly twice as fast and with up to three times greater accuracy than standard workflows.”
The future
Some ideas for what comes next - Nathan Lambert, Interconnects
”The part of o3 that isn’t talked about enough is how different its search feels. For a normal query, o3 can look at 10s of websites. The best description I’ve heard of its relentlessness en route to finding a niche piece of information is akin to a “trained hunting dog on the scent.” o3 just feels like a model that can find information in a totally different way than anything out there.”
…. and
”Much like o3, you should play with Claude Code even if you don’t code a lot. It can make fun demos and standalone websites in no time. It’s miles ahead in its approachability compared to the fully-autonomous agents like Codex (at least for the time being).”
Like humans, AI is forcing institutions to rethink their purpose - Gary Grossman, VentureBeat
”This requires design principles that are neither technocratic nor nostalgic, but grounded in the realities of the migration underway, based on shared intelligence, human vulnerability and with a goal of creating a more humane society. That in mind, here are three practical design principles.”
Learn more
If you want to learn more about generative AI, contact me about doing a webinar or course for your group. Last spring I offered multiple webinar series — for Boston Library Consortium, Penn State Libraries, and Mayo Clinic Libraries. I’m taking a break for the summer, but will be open to doing more in the fall.
➡️ Sign up with this link to get the self-study version of my AI Literacy course for $44.99 instead of $99 (through August 5).
»» Sign up here if you’d like to be notified of future courses I’m offering.««
And as always, you can follow me on Bluesky or Mastodon where I post daily about generative AI.