The Week in AI: Deep Research Wars
Perplexity follows OpenAI follows Google with a Deep Research function.
Here’s what you might have missed in the world of AI over the past week.
Perplexity enters the deep research game
On the heels of OpenAI, which released its much-lauded Deep Research feature for ChatGPT Pro users two weeks ago, Perplexity has now added a similar feature. It’s available to Perplexity Pro users ($20 p/m), which is great news for those who don’t want to shell out $200 p/m for ChatGPT Pro.
(Though OpenAI CEO Sam Altman said on X that their Deep Research feature is coming to Plus (10 uses p/m) and Free (2 uses p/m) users too.)
Performance-wise, Perplexity’s Deep Research seems a little more focused and concise than the ChatGPT version.
Here’s how o1 pro analyzed the output of both models to the same query about Intel stock, where A = Perplexity and B = ChatGPT:
In many ways, Answer A and Answer B complement each other:
• Answer A - more concise, actionable
• Answer B - more narrative, with broader historical illustration.
Both are solid, but for an immediate investment decision and a quick grasp of the risk–reward in Intel, Answer A has the edge.
So o1 pro slightly favors the Perplexity answer. 😃
I agree with the breakdown of characteristics: Perplexity’s output feels more concise and actionable, whereas ChatGPT’s is more comprehensive and nuanced, so it depends what you’re after.
New models coming from OpenAI and Anthropic
OpenAI’s Altman said on X that GPT-4.5 will ship next. After that, they’ll ship GPT-5, which will include all their different models rolled into one package — putting an end to this madness:
This GPT-5 bundle will include the full o3 model, whose performance, judging by this chart, I still find hard to grasp and somewhat scary:

Rumor has it, though, that Anthropic is also set to launch a new reasoning model within the next few weeks.
Their Claude 3.5 was my favorite for a long time; it has now been partially overtaken by o1 pro. But Dylan Patel of SemiAnalysis said on the Lex Fridman podcast: “Word in the San Francisco street is that Anthropic has a better model than o3.” 🤯
And then there’s Elon Musk, who’s launching Grok 3 today and claims it’s the “smartest AI on earth.” Whether that’s true we’ll see, but one thing is for sure: a lot more artificial intelligence is coming our way in the weeks ahead.
Recommended reading and listening
🗞️ Reasoning best practices (OpenAI): This guide from OpenAI explains the difference between GPT and reasoning models. It also has an overview of prompting best practices at the end — a must-read, because these are different from how you’d prompt a GPT model.
🗞️ Deep Research and Knowledge Value (Stratechery): Great background piece on the longer-term effects of Deep Research features on (knowledge) work. (Complement with Every’s Does OpenAI’s Deep Research Put Me Out of a Job? which, well, the title says it all.)
🗞️ The $200 AI Question: Should You Upgrade to ChatGPT Pro? (Animalz): I wrote this piece for Animalz a few weeks ago. It’s focused on content marketers, but even if you’re in a different field, it gives you a sense of what o1 pro is and isn’t good for.
🎧️ #459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters (Lex Fridman podcast): The “DeepSeek moment” already feels like months ago in AI time. Still, this episode is a great (and loooong) listen if you want a masterclass on the current AI (dare I say) landscape. Everything from the technical workings of LLMs to geopolitics gets covered — highly recommended if you can find the time. 😃