Inside the Minds of AI Pioneers: Lessons from Ezra Klein's Podcast
Nine takeaways from three insightful Ezra Klein podcasts on AI.
Ezra Klein, an award-winning journalist and New York Times podcaster, recently did three shows on AI:
"How Should I Be Using A.I. Right Now?" with Ethan Mollick, Wharton professor and author of "Co-Intelligence: Living and Working With A.I."
"Will A.I. Break the Internet? Or Save It?" with Nilay Patel, cofounder and editor-in-chief of The Verge.
"What if Dario Amodei Is Right About A.I.?" with Dario Amodei, Anthropic's cofounder and CEO, and former OpenAI researcher.
The full episodes are worth listening to, but here are my nine takeaways from these shows about AI's current state and future.
1. AI is approaching human-level persuasion
Anthropic's internal tests show that their latest model, Claude 3 Opus, produces arguments nearly as persuasive as those written by humans. Their research paper states:
"While the human-written arguments were judged to be the most persuasive, the Claude 3 Opus model achieves a comparable persuasiveness score, with no statistically significant difference."
Ezra suggests AI could become even more effective at changing minds by engaging in extended conversations and adapting to individual responses and styles. A/B tests could further refine its persuasion tactics.
In the short term, professionals like marketers and lawyers can benefit from these models' capabilities. In the longer term, AI's usage in marketing, political campaigns, and scams could become more powerful and concerning, raising ethical questions about AI shaping public opinion and decision-making.
2. We’re unprepared for AI's impact on society and mental health
Amodei suggests some people will adapt well to AI, while others may struggle, just as some now manage attention-driven technology well and others don’t. This assessment is overly optimistic and inaccurate.
Few people manage their relationships with current technology well. Everyone has problems with concentration, reaches for their phone first thing in the morning, and checks social media too often.
Believing the situation will be easier with AI is unrealistic, particularly if AI models aim for maximum engagement and persuasion. Given our current struggles with addictive tech, it's unlikely our minds, children, and society can withstand AI's adverse effects without guardrails and support — or that Big Tech companies will prioritize user well-being over revenue.
3. AI models have distinct personalities
One well-known trick to improve AI performance is giving it a role or personality, like "be an excellent copywriter" or "answer like a Starfleet commander." But the models themselves also exhibit unique personalities: ChatGPT is described as neutral and informative, Claude as warm and engaging, and Google's Gemini as friendly and helpful to a fault.
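If you want to try the role trick programmatically, here is a minimal sketch using Anthropic's Python SDK. The model name and prompt wording are illustrative assumptions on my part, not anything prescribed in the episodes:

```python
# Minimal sketch of role prompting with the Anthropic Python SDK.
# The model ID and prompts are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=300,
    # The "role" goes in the system prompt; the actual task goes in the user message.
    system="You are an excellent copywriter who writes clear, persuasive ad copy.",
    messages=[
        {"role": "user", "content": "Write a two-sentence ad for a reusable water bottle."}
    ],
)

print(response.content[0].text)
```

The same idea works in the chat interface: prepend the role to your prompt and compare the output with and without it.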
Even the creators of these models don't fully understand the emergence of these differences. Amodei suggests AI personalities result from "tunable approaches" decided by system makers, but the exact mechanisms remain unclear. Are these personalities a byproduct of the training data, the model architecture, or a combination of factors?
Understanding and noticing the personalities helps you find the model that best suits your needs. For example, I prefer Claude's thoughtfulness and eloquence for creative tasks, coaching, and analyzing meetings. ChatGPT's more neutral and informative style makes it my go-to tool for troubleshooting and quick questions.
4. AI models even puzzle their creators
Companies like Anthropic and OpenAI run research programs to study their own AI models, which shows that even the creators of these Large Language Models (LLMs) don't fully grasp them. This "interpretability" research focuses on understanding AI's decision-making process and inner workings.
It's somewhat concerning that AI's creators don't fully comprehend these systems, but it also shows anyone can discover new possibilities, quirks, and use cases for AI models.
For example, a recent study found that standard AI models can predict a patient's race from medical images, something human experts can't do. Even more astonishing, the researchers don't fully understand how the AI accomplishes this.
5. Use the "Best Available Human" standard to evaluate AI
The "Best Available Human" (BAH) framework assesses an AI's performance on a task. As Mollick explains, "Is the A.I. more or less accurate than the best human you could consult in that area?" By comparing the AI's output to that of the most skilled person in a field, you can understand its strengths and weaknesses.
Say you're a marketing professional considering using AI to generate ad copy. You can compare the AI's output to the work of your best copywriter. If the AI consistently produces copy as good as or better than your top human writer, you can be confident in its abilities in this area.
The BAH standard works best for tasks in your own area of expertise, where you can judge the AI's output with confidence.
6. Even AI's creators don’t have answers to the big questions
Amodei often doesn't give clear answers when Klein raises potential problems with AI's impact.
Amodei admits discomfort about the concentration of power in a few AI companies but is unsure how oversight should work given commercial interests. Similarly, on AI's potential to disrupt the economy, he explains, "I suspect there's some different method of economic organization that's going to be forced," but concedes, "I don't have the answer to that." He also acknowledges the uncertainty around educating children and planning their careers for an AI-driven future.
AI proponents often claim the technology will accelerate scientific progress. Amodei echoes this sentiment, citing its potential to speed up drug discovery. When Klein presses for specifics, Amodei's response is vague, mentioning only logistical tasks like streamlining clinical trial participant sign-ups.
These examples show that AI's creators and proponents often give plausible-sounding answers that fall apart under scrutiny. Whether they don't have the answers or don't want to give them doesn't matter much; it's concerning either way, and it shows that ongoing critical thinking, public discourse, and oversight are essential for AI's responsible development.
7. Anthropic doesn't use your data to improve its models
Klein mentions he now mostly uses Claude Opus, Anthropic's advanced AI model. That got me excited: he’s using the same model as me! I imagined a collective intelligence, with influential figures like Klein and other smart users enhancing the AI with every interaction.
As I fact-checked this section, I learned my assumption was incorrect. According to Anthropic’s privacy policy, they don’t train their model on your data:
"We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Acceptable Use Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training."
OpenAI's policy is trickier: it doesn't train on data sent through its API, but it may use conversations from ChatGPT. Still, this discovery surprised me and should alleviate some concerns about data privacy.
8. AI will force new internet business models to emerge
The flood of AI content could break the internet's ad-driven business model, compelling media companies, platforms, and users to find new paths. "Somewhere in there all of this stuff does break," Patel says. "And the optimism that you are sensing from me is, well, hopefully we build some stuff that does not have these huge dependencies on platform companies that have no interest at the end of the line except a transaction."
This split internet, divided between AI-powered engagement and human-curated authenticity, presents both challenges and opportunities. New media properties must build audiences without relying on traditional channels overwhelmed by AI content, while AI could also empower users to curate their own online experiences.
9. AI's exponential growth outpaces society's adaptation
The contrast between AI's rapid development and society's slower reaction and adoption is striking.
"For the whole period of 2021 and 2022, even though we continued to train models that were better and better, and OpenAI continued to train models, and Google continued to train models, there was surprisingly little public attention to the models," Amodei recounts. "And then, all of a sudden, when ChatGPT came out, it was like all of that growth that you would expect, all of that excitement over three years, broke through and came rushing in."
Amodei believes we're just hitting the steep part of the exponential curve now. He sees Artificial General Intelligence (AGI) not as a fixed milestone but as part of a smooth curve of increasing capabilities.
As AI advances rapidly, society will struggle to keep up and make sense of the changes. We need to develop ways to manage its risks and benefits at each stage of development. This will be essential for harnessing its potential while minimizing the negative impacts on our lives and work.
Balancing AI’s benefits and risks
AI's development brings opportunities and risks. Still, I'm optimistic and excited about its potential to amplify human creativity, problem-solving, and productivity. But we must all develop a basic understanding of AI to reap its benefits and confront the dangers.
I recommend listening to the full episodes of Ezra Klein's podcast for more in-depth insights on AI. If you need to pick one out of the three, here’s some guidance:
Nilay Patel's episode is relevant if you're in content, media, or marketing, as it dives into how AI could disrupt and reshape these industries.
Dario Amodei's interview is a must-listen for those interested in AI's societal effects and the challenges of developing safe systems.
Ethan Mollick's episode is a great resource for AI beginners who want practical tips on integrating these tools into their work and lives.
Each conversation offers unique perspectives and insights that deepen your understanding of this transformative technology.
See you next time!
Tim