AI wants to help
AI tools that do remarkable things are appearing so often that it's impossible to keep up. I have just tried the innocuously named NotebookLM from Google, and I think you'll be interested in the results.
In June I posted an article (pdf) called "Where does the science of reading go from here?" I uploaded it to NotebookLM, which "read" it in a few seconds. I then had it generate a summary, an FAQ, and a podcast-like discussion of the article.
The summary and FAQ were excellent. I could quibble about minor details, but hey, they are only summaries. You still have to read the original article to get the full story.
The "podcast" between two very lifelike bots is astonishing. Again, some details are off, but I didn't catch any flagrant factual errors. For example, in the text I used SoR as the abbreviation for "science of reading," which it pronounced "soar." Not a bad idea, actually. "Soar" could refer to the educational approach, leaving "science of reading" to refer to the field of research, which is the meaning of the phrase outside of education.
Another flag that it was AI-generated: one of the speakers says something like "this is important because it's written by Mark Seidenberg [pronounced correctly!]; he's an expert and people listen to him," which greatly exaggerates my influence. Still, it's uncanny how realistic the discussion is, and it's quite accurate (though a little long). The AI model has certainly nailed the podcast format. I assume that it was trained on Emily Hanford's APM podcast documentaries, among other things, and the female speaker even sounds a bit like her. Which would be both creepy and flattering, I suppose.
I'm posting the AI-generated summary and FAQ, and a link to the podcast, because they may help people understand my original article.
On the NotebookLM web page, Google states that they will not use AI-generated text as training data. This is a huge concern, of course, because using such imperfect, possibly error-ridden texts to train the models has the potential to propagate misinformation. Google's assurance seems pretty shallow: they might not directly use the generated text as training data, but they presumably will if the text is posted somewhere else (e.g., on a blog like this one).
I recently heard an AI expert say the world was dividing into people who are engaging with the new AI and those who are staying away because they just don't want to deal with such a massive development. That sounds about as correct as any "there are two kinds of people in this world" generalization can be. This stuff certainly makes me glad I'm not a new assistant professor trying to navigate teaching and research. The landscape seems a lot more treacherous now.
The original article. The AI summary and FAQ (here). The "podcast".