An early look at a new experimental “tool for thought” that I’ve been developing with Google.
About ten months ago, I mentioned here at Adjacent Possible that I was taking on a part-time position at Google as a “visiting scholar,” an admittedly enigmatic title. I promised then that if anything interesting came out of my tenure there I would let all of you know about it first. This email is me living up to that promise.
Earlier today, during their annual I/O conference, Google announced a new experimental product, Project Tailwind. That’s what I’ve been working on.
Anyone who has been reading this newsletter from the beginning can likely make a pretty good guess about the general feature set of this mysterious Project Tailwind. But a quick recap for people who are less familiar with my obsessions: ever since college, I’ve been fascinated by the whole class of software that Howard Rheingold first called “tools for thought” almost forty years ago. During my sophomore year, I lost countless hours building a dream system for managing all my notes using the new HyperCard platform that Apple had introduced. My first book was all about how new software interfaces would shape the way we create and communicate. Later in my career as a writer, I became an unusually vocal early adopter of an information management tool called DEVONthink—still thriving—that offered the closest thing to the uncanny semantic “understanding” we see in large language models today. I had a hand in helping the folks at Medium define their writing tools in the early days of that platform. Around the same time, with the talented folks at Betaworks, I helped launch a web app for exploring your research notes, called Findings, that is now part of Instapaper. And I’ve written extensively about my own writing tools and workflow, as in this love song to Scrivener, or the creative workflow series here at Adjacent Possible.
So you can probably see where this is going. But there’s no need to speculate—I can just show you what I’ve been up to, working with a brilliant team of collaborators inside of Google. Here’s one of those collaborators, Josh Woodward, explaining Project Tailwind from the stage earlier today:
On the Tailwind team we’ve been referring to our general approach as source-grounded AI. Language models, as we all know, have a habit of hallucinating facts, and while the information contained in the model’s training data is vast, it is also one-size-fits-all. Tailwind allows you to define a set of documents as trusted sources which the AI then uses as a kind of ground truth, shaping all of the model’s interactions with you. In the use case shown on the I/O stage, the sources are class notes, but it could be other types of sources as well, such as your research materials for a book or blog post. The idea here is to craft a role for the LLM that is not an all-knowing oracle or your new virtual buddy, but something closer to an efficient research assistant, helping you explore the information that matters most to you.
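To make the "source-grounded" idea concrete: Tailwind's internals haven't been published, but the general pattern it describes—retrieving passages from user-designated sources and instructing the model to answer only from them, with citations—can be sketched in a few lines. Everything below (the function names, the naive keyword-overlap ranking) is my own illustrative assumption, not Google's implementation:

```python
# A generic retrieval-grounded prompting sketch, NOT Tailwind's actual code.
# Sources are {title: text}; the model is shown only retrieved excerpts and
# told to answer strictly from them, citing each excerpt by title.

def retrieve(question: str, sources: dict, top_k: int = 2) -> list:
    """Rank sources by naive keyword overlap with the question (a stand-in
    for the semantic/embedding similarity a real system would use)."""
    terms = set(question.lower().split())
    scored = []
    for title, text in sources.items():
        overlap = len(terms & set(text.lower().split()))
        scored.append((overlap, title, text))
    scored.sort(reverse=True)
    return [(title, text) for score, title, text in scored[:top_k] if score > 0]

def grounded_prompt(question: str, sources: dict) -> str:
    """Assemble a prompt that restricts the LLM to the retrieved excerpts."""
    parts = ["Answer using ONLY the sources below. Cite each source by title."]
    for title, text in retrieve(question, sources):
        parts.append(f"[{title}]\n{text}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)
```

A production system would swap the keyword overlap for embedding similarity and stream the excerpts into the model's context window, but the division of labor is the same: the user's documents supply the ground truth, and the prompt confines the model to them.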
Those explorations can take all sorts of forms. We’re discovering new ones almost every day playing around with the product. Josh shows one of our more recent discoveries in the I/O demo: the idea of making an on-the-fly glossary based on a specific topic, in this case, the career of Grace Hopper.
That’s the kind of immediately useful feature that would have been extremely challenging to program in the pre-LLM era: given an arbitrary string of text, potentially tens of thousands of words long, create a list of important and unusual words and phrases related to a specific topic and define them. Another one I love: upload a business plan and some draft marketing materials for a new product, and ask Tailwind to suggest a list of additional features for the product, based on those sources. No software app in the world could do that just a few years ago. Now running that prompt takes five seconds.
I’ve also found that Tailwind works extremely well as an extension of my memory. I’ve uploaded my “spark file” of personal notes that date back almost twenty years, and using that as a source, I can ask remarkably open-ended questions—“did I ever write anything about 19th-century urban planning” or “what was the deal with that story about Houdini and Conan Doyle?”—and Tailwind will give me a cogent summary weaving together information from multiple notes. And it’s all accompanied by citations if I want to refer to the original direct quotes for whatever reason.
There’s much more to say about Project Tailwind, but for now, if you’re interested in getting an early look at it, we’ve put up a signup page. (U.S.-only for now, I’m afraid.) We'll be rolling out beta access to it over the summer. I hope some of you will get a chance to try it out—I suspect the Adjacent Possible community will have invaluable feedback as we develop this idea further.
P.S.: Given my work with Google, I have stopped doing any journalistic writing about contemporary technology in other venues, to avoid any concerns about conflict of interest; here at Adjacent Possible, I plan to use some of these newsletter editions to talk about how I’m using Project Tailwind and share other news about the product, but I will only send those missives to the entire subscriber base. Tools for thought have been central to Adjacent Possible from the beginning, so it would be a waste not to share my thoughts on the actual tool I’m helping Google build. But posts that are sent only to paying subscribers will not have any Project Tailwind-related content in them. And just to be clear: any opinions stated here are my own, not Google’s.
BTW, the YouTube embedding seems to be a little funky, so if you're having trouble getting to the Tailwind segment, it's at about 2:19 in the video.
I'm glad anytime someone feels happy about contributing to big ideas.
And your enthusiasm for this project may make Google happy.
But have you checked in with the rest of society?
Or is this model based on Google's Street View approach, where they drove around taking pictures of everyone's homes without permission and then said to Congress, "Here is your campaign contribution, and by the way, we don't want any regulations"?
Is that your secret plan, too?
It sure seems like Google, Microsoft, and the rest of big tech are happy to steal everyone's thinking from over the years and just feed it all into your AI and others, with no qualms.
Look at how badly that's turned out for everyone except the top 30% in the world so far: a super-polarized society and billionaires building feudal empires while the rest of us pick up the pieces amid homelessness, suicide, gun violence, and polarized, insulated politicians.
I read Naomi Klein in The Guardian this week, and I personally will be voting no and organizing against these giant corporations' digital-theft model.
In case you haven't read her op-ed, let me know what I'm missing.