Attending Reid Hoffman’s lecture at LSE felt like glimpsing the future—but not for the reasons you’d think.
The content wasn’t revolutionary; it was the delivery that stood out. Hoffman read from a teleprompter with the enthusiasm of someone seeing the text for the first time. The language and structure could have been lifted straight from an AI prompt: vague generalities, repeated examples, and metaphors that felt more like padding than insight. AI as a GPS for the mind? Scaffolding us while we scaffold it? Ask ChatGPT to generate a lecture on the future of AI and you’ll get something eerily similar.
Hoffman himself openly admits to “co-writing” his latest book with ChatGPT, so I’m fairly certain this was my first experience of an AI-generated lecture. First time for everything, I suppose. But it left me thinking: maybe the future of public events isn’t lectures—it’s discussions.
During the Q&A, Hoffman finally sounded human. The polished yet lifeless delivery gave way to rougher but far more engaging thoughts.
For instance, if a chatbot seems left-leaning, it’s because of biased training data and reinforcement learning from human feedback: users from elite universities skew the training. But the interesting part is the variety. Ask different chatbots to write a suicide scene, and Claude might oblige while Gemini suggests seeking professional help. With different bots, you’re essentially choosing your own bias.
Elia Kabanov is a science writer covering the past, present and future of technology (@metkere).
Illustration by Elia Kabanov feat. Midjourney, photo by Elia Kabanov.
If you like what I’m doing here, subscribe to my newsletter on all things science.