6 Comments
Michelle F. Eble

I’m excited to see what’s next for adjacent possible. This essay has given me some great ideas for how to incorporate NotebookLM into the courses I’m teaching in the fall. Great point about there being so much to read, and about the invention of the table of contents & index being user interface elements that help guide this.

John Mastriani

I believe a short-term gain from the rise of AI is going to be the rise of semantic web analytics outside of big social media and e-commerce companies. The tough part about semantic web analytics is managing all the relationships among your data. AI can do it in its metaphorical sleep and will bring this tech down to the layman, small businesses, and others.

Gregory

“If Ezra is experiencing this limitation in his use of AI tools, I would argue that it is a failure of UI not AI -- in other words, the user interface is not allowing him to teach the model about his own particular interests and sensibility”

I’m not sure if making a distinction between AI and its UI is really appropriate, such that a failed experience can be relegated to the interface only.

Here, you are very close to saying that the problem was not even the user interface, but the user himself, because he didn’t prompt correctly.

I’m glad you didn’t say that.

But when we consider that prompting is in fact a problem of the overall experience (what it demands of us in order to produce valid or useful results), we start to see that the way the UI works implicates the entire architecture and how the model works. It isn’t just some minor detail at the edge of its profound internals.

Mark Hudson

I was on the edge of my seat about the 700-word biography of the tribe member, thinking that the AI might have hallucinated it in whole or in part. But I guess it just ended up being a nice alternate viewpoint, collecting all the mentions of her into a single narrative. Still, I worry that the AI might make mistakes. I've seen LLM summaries that inaccurately conflate ideas from two related characters, or even make fundamental misinterpretations by missing a negative connotation (e.g. disdain vs. admire) or a double negative (like uncritical vs. unbiased).

Andrew Smith

It is a baby/bathwater scenario for sure. I think about this every day: the magic of cognitive offloading can quickly turn into dependency and then cognitive atrophy.

I think about this when I use voice mode and treat it really differently than I treat a person. We're modifying ourselves to blend in with these new tools in ways we've never done before - at least not so quickly.
