12 Comments
The AI Physician

Every time NotebookLM releases an update, I’m blown away all over again. Ever since I stumbled onto it last fall, right after the audio overview feature launched, it’s become my favorite AI tool by a mile. And I say that as someone who uses Gemini and ChatGPT every single day, plus OpenEvidence for medical literature and Chartnote as my AI scribe in clinic. NotebookLM, though, occupies its own category. It’s not just a search tool. It has quietly become my central hub for thinking, learning, and organizing my own knowledge.

I’ve used it to explore topics I never would have dug into before. I went from watching *Vikings* on Netflix to learning the real history of the first-millennium Norse world. Then somehow that turned into reading about molten-salt nuclear reactors, which led to a deeper understanding of fission and fusion than I ever had back in my physics and engineering days before medical school. NotebookLM didn’t just give me answers. It made the learning addictive.

And now these new Slide Deck and Infographics features? They’re on a whole different level. They hit me the same way audio overviews did when they first came out. The funny thing is that when I show this stuff to people, only a handful seem as excited as I am. I can’t tell if it’s because they don’t quite grasp how transformative this tech is, or if they just don’t enjoy learning the way I do. But honestly, if the Debbie Downers would give it ten minutes, I think they’d be hooked.

Most of us have said at least once in our lives, “If only I had this back when I was in school.” NotebookLM is the first time I’ve said that and actually meant it. In high school, college, and especially medical school, this would have changed everything. One “Anatomy” notebook. Drop in my notes, textbook chapters, atlas images, lecture audio, whatever I had. Then I create flashcards, quizzes, slide decks, audio overviews for when I’m at the gym. It’s crazy to even imagine!

NotebookLM has made learning fun for me again. That’s not something I expected to feel in my fifties. And I can’t wait to see where you take this next. At a time when the news is always talking about how AI is leading to unemployment, the dumbing down of the population, and possibly doomsday, I wanted to thank you for building an AI tool that makes the world better in a very real way.

Steven Johnson

Ah, that's great to hear. I shared your note with the team -- so motivating for us to hear stories like this!

David

The slide deck gave me a queasy feeling because the "training data" used by the AI model to generate the imagery is copyrighted work by human artists. (Also queasy-making: the Escher-like involutions of the tome pictured in your final slide. Kinda sloppy, in the AI sense.)

A big part of the reason the established peer-review model is collapsing is the strain of genAI invading scientific research. GenAI "papers" of a quality that hovers around dogshit have overwhelmed the editorial desks of academic journals to the point that some have simply shut down. Then there are lazy or unscrupulous reviewers using genAI to summarize and write their peer reviews for them, and *then* there's the emerging practice of embedding directives to the genAI "reviewer" to give a favourable assessment regardless of actual quality. It's perfidy all the way down.

I see the potential upside of tools like Notebook LM, I really do, but my god the negative consequences of genAI tools are already here, and they're mostly in the range between distressing and horrifying.

Steven Johnson

It's a really interesting question about the imagery. Obviously I fully support licensing deals to use protected IP for any sort of training, whether it's images or books or any other kind of content. But I also think the period graphic novel illustration style that I asked for in this deck is now part of the public domain as a style -- just like, say, bluegrass music or detective fiction is. You don't need to purchase the rights to compose a new rhythm-and-blues song; the general conventions of that genre belong to everyone at this point. And so to my mind there's a case for being able to invoke a visual style like this one using AI.

Maybe the thornier question is what it means for the existing illustrators who can create these sorts of images by hand. Obviously, in my case, the slide deck I created for the post has no effect on their livelihoods, because I would never have commissioned an illustrated slide deck before. So the clear upside here (if the quality is good, which I generally think it is) is more people having access to visualization tools that would never have been available to them before. That seems like an obvious win. The question is what impact these tools have on traditional illustrators. Is there a segment of the existing market for illustration that will stop paying for human artwork because they can generate it themselves? Or do these tools allow talented illustrators to expand their creative range and productivity and increase the value of their work? Maybe both turn out to be true?

Alex Wright

Love this framing, and the retro slide deck is genius. This got me thinking about the early twentieth-century documentalists like Paul Otlet and Suzanne Briet, who also wanted to move past the article as the default unit of knowledge. Their idea was to break things down into smaller, reusable pieces and let expert curators build dynamic dossiers, atlases, and encyclopedias that could evolve over time. H.G. Wells had his own version of this idea with knowledge-worker "samurai" feeding a World Brain. Anyway, great stuff—excited to see where you and the NotebookLM team take this next.

Steven Johnson

Yes exactly! Also Benjamin's Arcades Project. The funny thing is that the original proposal I wrote for FEED back in 1994 had this idea that we would publish smaller blocks of text that could be interlinked in multiple ways, instead of just straight linear articles. But it turned out to be too complicated (or, in a way, too simple) because each unit had to be self-contained so that it could be read on its own or in a different context and still make sense. So there was no room to develop a complex idea.

Rick Bolin

I’ve had disappointing results from AI searches. Although the summaries and references appear to be very beneficial, I’ve found the error rate very high. Until they eliminate the hallucinations, I wouldn’t put too much faith in them. Contrary to most of the glowing articles in the press, many of the corporations that have been trying AI are disappointed. I’m worried that AI is going to lead to a new tech bubble that will decimate the economy.

Erica Kleinknecht O’Shea

Also intrigued by the AI peer-review moderator notion. Disagreement between reviewers is all too common, and editorial decision-making can seem like a coin toss. While LLMs are not inherently objective, it seems to track that an LLM could review a manuscript, the sources cited in that manuscript, AND two peer review commentaries, and then suggest a way forward. The job of the managing editor would then be to keep human judgment reasonably in the loop. Interesting!

Andy

It seems like the key difference is that the new format is bidirectional... but is that really true? A notebook still doesn't feel fundamentally different from a paper in this respect (a few papers, and you can ask questions, etc.), but there is no mechanism to contribute back to the knowledge; that still has to come in through existing mechanisms (new papers), via the AI scraping them from the web (deep research). If you were to draw a diagram of the flow of units of knowledge, perhaps the unit is changing but not the direction.

Erica Kleinknecht O’Shea

I’ve not played around with deep research much but am intrigued. Wondering what the hallucination rate is? Or is there a vetting process built in to ensure that all sources actually exist?

Steven Johnson

The sources all definitely exist -- there's no risk of hallucination there. (You can inspect them directly in Notebook or via links to confirm.) The risk of hallucination would be in the report that is generated, or in any chat engagement (or artifact creation) you have after the sources have been collected. But with a grounded, citation-based UI like Notebook's, the hallucination rate is much lower and it's easy to fact-check everything.

Rick Bolin

I tried three different AI programs on a simple task: finding the right oil filter for my car. The info on relative filter efficiency was helpful, but when I asked for specific part numbers, I got errors from every program. Telling them they made a mistake got an "I'm sorry" message. Once I got two apologies in a row. When I asked two of them to grade themselves on their responses (once I pointed out all the mistakes), one graded itself a C- and the other a D+.