Future Product Days: Future of Product Creators

In his talk The Future of Product Creators at Future Product Days in Copenhagen, Tobias Ahlin argued that divergent opinions and debate, not just raw capability, are the missing factors for achieving useful outcomes from AI systems. Here are my notes from his presentation:




Many people are espousing a future vision in which parallel agents create products and features on demand.
2025 marked the year when agentic workflows became part of daily product development. AI agents quantifiably outperform humans on standardized tests: reading, writing, math, coding, and even specialized fields.
Yet we face the "100 interns" problem: managing agents that are individually smarter but "have no idea where they're going."


Limitations of Current Systems

Fundamental reasoning gaps: AI can calculate rock-paper-scissors odds while failing to understand that it has a built-in disadvantage by going second.
Fatal mistakes in real-world applications: suggesting toxic glue for pizza, recommending eating rocks for minerals.
Performance plateau problem: unlike humans, who improve with sustained effort, AI agents plateau after initial success and cannot meaningfully progress even with more time.
Real-world vs. benchmark performance: research from Monitor shows 63% of AI-generated code fails tests, with 0% working without human intervention.


Social Nature of Reasoning

True reasoning is fundamentally a social function, "optimized for debate and communication, not thinking in isolation"
Court systems exemplify this: adversarial arguments sharpen and improve each other through conflict
Individual biases can complement each other when structured through critical scrutiny systems
Teams naturally create conflicting interests: designers want to do more, developers prefer efficiency, PMs balance scope. This tension drives better outcomes.
AI significantly outperforms humans in creativity tests. In a Cornell study, GPT-4 performed better than 90.6% of humans in idea generation, with AI ideas being seven times more likely to rank in the top 10%
So the cost of generating ideas is moving toward zero, but human capability remains capped by our ability to evaluate and synthesize those ideas.


Future of AI Agents

Current agents primarily help with production, but future productivity requires an equal amount of effort in evaluation and synthesis.
Institutionalized disconfirmation: creating systems where disagreement drives clarity, similar to scientific peer review
Agents designed to disagree in loops: one agent produces code while another evaluates it, creating feedback systems that can overcome performance plateaus.
True reasoning will come from agents designed to disagree in loops, rather than from simple chain-of-thought approaches.
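The disagreement loop described above can be sketched in a few lines. This is a minimal illustration, not anything shown in the talk: the `generate` and `critique` functions here are hypothetical stand-ins for real model calls, and the acceptance logic is deliberately simplistic.

```python
# Sketch of a producer-critic agent loop: one agent drafts, another
# objects, and the draft is revised until the critic accepts.
# Hypothetical stand-ins for real model calls.

def generate(task, feedback=None):
    """Producer: drafts a solution, revising if given feedback."""
    draft = f"solution for {task!r}"
    if feedback:
        draft += f" (revised: {feedback})"
    return draft

def critique(draft):
    """Critic: returns an objection, or None to accept."""
    if "revised" not in draft:
        return "missing edge-case handling"
    return None

def disagree_loop(task, max_rounds=5):
    """Run producer and critic against each other until agreement."""
    feedback = None
    draft = None
    for _ in range(max_rounds):
        draft = generate(task, feedback)
        feedback = critique(draft)
        if feedback is None:
            return draft  # critic accepts
    return draft  # best effort after max_rounds

print(disagree_loop("parse dates"))
```

The point of the structure, per the talk, is that the critic's objections are an external signal the producer cannot generate for itself, which is what lets the loop push past a single agent's plateau.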
Published on September 25, 2025 02:00


Luke Wroblewski's Blog
