Jurgen Appelo's Blog

November 27, 2025

The Agentic Organization

Building AI Highways to Bypass Human Traffic Jams

Humans are the bottleneck. Agentic organizations separate AI workflows from human processes, like ring roads that bypass city centers. Dual-lane processes should prevent intellectual traffic jams.

Driving through Peru last month taught me more about organizational design than most MBA programs ever could.

Picture this: you're cruising along on a highway, minding your own business, when suddenly the road dumps you straight into the heart of some random city you've never heard of. No warning. No bypass. No escape route. One minute you're doing highway speeds; the next you're crawling past vendors hawking empanadas and ceviche while three-wheeled taxis cut you off and camouflaged speed bumps, lurking every fifty meters, try to wreck your rental car.

Half an hour of pure chaos later, you emerge on the other side of town, the rental car miraculously intact, wondering why anyone thought it was brilliant urban planning to funnel every piece of through-traffic directly past the town square. Sometimes, my partner and I would just give up, park the car, and grab lunch while contemplating the obvious: there had to be a better way to let fast-moving traffic skip a tour of the local mercado.

Turns out, this traffic challenge perfectly illustrates what's broken in many organizations today.

The Theory of Constraints, dreamed up by Dr. Eliyahu M. Goldratt, cuts through management BS with surgical precision. Find the single biggest bottleneck in your system. Fix it. Everything else is theater. The approach doesn't care about your org chart, your processes, or your feelings; it cares about throughput. Goldratt's five focusing steps are beautifully ruthless: identify the constraint, exploit it for everything it's worth, subordinate everything else to it, elevate its capacity, and repeat. No fluff, no improvement committees, just relentless focus on what actually matters now.
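
Goldratt's loop is mechanical enough to sketch in a few lines of Python. Here's a toy model (the stage names, capacities, and the 1.5x elevation factor are all invented for illustration): a pipeline's throughput equals the capacity of its slowest stage, so we repeatedly find that stage and elevate it.

```python
# Toy sketch of Goldratt's five focusing steps (illustrative numbers only).
# A pipeline's throughput is capped by its slowest stage: the constraint.

def find_constraint(capacities: dict[str, float]) -> str:
    """Step 1: identify the constraint (the lowest-capacity stage)."""
    return min(capacities, key=capacities.get)

def elevate_until(capacities: dict[str, float], target: float) -> dict[str, float]:
    """Steps 2-5: exploit and elevate the constraint, then repeat with the next one."""
    while min(capacities.values()) < target:
        bottleneck = find_constraint(capacities)
        capacities[bottleneck] *= 1.5  # "elevate": add capacity where it matters
        print(f"Elevated {bottleneck} to {capacities[bottleneck]:.0f} units/day")
    return capacities

# Hypothetical value stream (units per day). Everything else is theater.
pipeline = {"intake": 120, "review": 40, "approval": 15, "delivery": 200}
elevate_until(pipeline, target=100)
```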

Apply this to traffic, and the solution becomes rather obvious. When high-speed traffic gets tangled with low-speed traffic, the slow stuff becomes the constraint that drags everything down to its sluggish pace. Solution: build a ring road! Keep the highway traffic moving around the city instead of straight through it. Add proper on-ramps and off-ramps so people can switch lanes safely when they need to. Everyone wins: speedsters keep their velocity, locals get their peace, and nobody loses their chassis over a speed bump.

This same principle shows up everywhere, though most organizations are too busy admiring AI-generated meeting notes and PowerPoint slop to notice. Fast-moving value streams crashing into slow-moving process steps? System-wide slowdown. High-speed trains sharing railroad tracks with freight wagons? Delays for everyone. Broadband internet packets squeezed through an old landline? Digital Stone Age. The pattern is universal: when you mix different speeds on the same infrastructure, you get the worst of both worlds.

But here's what many people forget: the slow lane enables the fast lane, not the other way around. Those highways don't build themselves. They're constructed, maintained, and funded by the people puttering around in their tuk-tuks. Speed doesn't equal importance. Strategy—real strategy—almost always happens in the slow lane, not the fast one.

Which brings us to the organizational intelligence revolution that most companies can still only dream of.

Do you like this post? Please consider supporting me by becoming a paid subscriber. It’s just one coffee per month. That will keep me going while you can keep reading! PLUS, you get my latest book Human Robot Agent FOR FREE! Subscribe now.

The Non-Agentic Trap

Walk into any "AI-powered" organization today and you'll probably witness a masterpiece of inefficiency. Employees are prompting ChatGPT, wrestling with Copilot, and sweet-talking Gemini all day long. They share their clever prompts in Slack channels, celebrate their AI wins in all-hands meetings, and generally feel very cutting-edge about their human-machine collaboration.

But look a bit closer. Every AI interaction starts with a human typing something and ends with a human reading something. The AIs never talk to each other. They execute nothing independently. They make no meaningful decisions or kick off workflows without a human standing there, hand on the wheel, ready to intervene. This is humans-in-the-loop territory, where artificial intelligence exists purely as a fancy autocomplete for human workflows.

This is the equivalent of those Peruvian towns with no bypass. All organizational intelligence—human and artificial—gets funneled through the same bottleneck: the human brain. And guess what happens? Everything slows down to human speed. The AI might be capable of processing thousands of documents per second, but it has to wait for Susan from product management to read the summary, think about it over lunch, discuss it in Tuesday's meeting, and maybe, just maybe, decide what to do about it by Friday.

The constraints here are Susan's brain, Susan's calendar, and Susan's need to feel important by being in every decision loop. The result is organizational traffic jams that would make a Peruvian city center blush.

The Agentic Alternative

Now imagine a different world. Employees orchestrate AI agents that can work autonomously, share results directly with other AI agents, initiate workflows without human approval, and manage entire value streams from start to finish. Humans are still crucial, but they're watching from the sidelines, monitoring dashboards, setting strategic direction, and intervening only when something needs the slow, deliberate thinking that humans excel at.

This is humans-on-the-loop: artificial intelligence running its own native workflows at AI speed while humans focus on what humans do best. The AIs take the highway; the humans work the town square.

This isn't science fiction. Some companies are building these systems right now. They're creating separate infrastructure for AI-driven value streams, letting machines talk to machines while humans focus on the strategic work that actually benefits from human judgment, creativity, and wisdom.

The transformation isn't just about speed—though speed matters when you're trying to spot risks and seize opportunities faster than your competitors. It's about using each type of intelligence where it adds the most value. AI excels at processing massive amounts of data, recognizing patterns, and executing well-defined tasks at superhuman velocities. Humans excel at setting context, making judgments with incomplete information, and thinking strategically about wicked problems.

Building the Bypass

Agentic organizations don't happen by accident. They require deliberate architectural choices that most companies are afraid to make. You need to build a separate infrastructure for AI workflows, with clear protocols for when and how humans intervene. You need to design handoff points where human intelligence and artificial intelligence can collaborate without creating bottlenecks. And you need to be ruthless about identifying which processes truly need human involvement versus which ones are being slowed down by human ego.

Most importantly, we need to accept that artificial intelligence doing its work without constant human supervision is our next human challenge. The goal isn't to keep humans in control of everything; it's to let humans control what matters while letting machines handle everything else.

We need to accept that artificial intelligence doing its work without constant human supervision is our next human challenge.

This requires a fundamental shift in how we think about work. Instead of asking, "How can AI help humans be more productive?" the question becomes, "What work should humans do, and what work should go to the machines?" The first question keeps you trapped in the non-agentic model, where AI is just a fancy tool in human-first workflows. The second question opens up the possibility of true organizational intelligence that operates at multiple speeds simultaneously.


Different Lanes, Different Speeds

I'm betting my career on agentic organizations. As I described earlier in "Scrum Is Done," we need to operate at multiple speeds without everything grinding to a halt. Strategic thinking happens slowly, deliberately, with humans wrestling with ambiguous problems and making judgment calls that shape the organization's direction. Operational execution happens fast, with AI agents processing transactions, updating systems, generating reports, and handling routine decisions without human intervention.

The magic happens when these different intelligences enable rather than obstruct each other. Humans set the strategic direction and design the guardrails. AI agents execute within those parameters at AI speed. When edge cases arise or strategic pivots are needed, the system should know how to escalate to human judgment. When humans make strategic decisions, those decisions get translated into AI-executable workflows that run autonomously.

Humans set the strategic direction and design the guardrails. AI agents execute within those parameters at AI speed.
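
To make those on-ramps and off-ramps concrete, here is a minimal humans-on-the-loop sketch in Python. Every name, field, and threshold is hypothetical; treat it as a sketch, not a reference implementation. Agents execute at full speed inside human-defined guardrails, and only edge cases take the off-ramp to a human review queue.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    task_id: str
    action: str
    confidence: float    # the agent's self-assessed certainty, 0.0-1.0
    within_policy: bool  # did the action stay inside human-defined guardrails?

CONFIDENCE_FLOOR = 0.85  # hypothetical threshold, set by humans in the slow lane

def route(result: AgentResult, human_queue: list[AgentResult]) -> str:
    """Fast lane by default; off-ramp to human judgment only for edge cases."""
    if result.within_policy and result.confidence >= CONFIDENCE_FLOOR:
        return "executed"        # AI highway: no human in the loop
    human_queue.append(result)   # on-ramp to the town square
    return "escalated"

queue: list[AgentResult] = []
route(AgentResult("T-1", "reorder stock", 0.97, True), queue)  # -> "executed"
route(AgentResult("T-2", "refund $9,000", 0.62, True), queue)  # -> "escalated"
```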

You know you've built an agentic organization when the machines can safely ignore the humans for any kind of work where humans are more than happy to be ignored. Let people focus on what they do best: thinking slowly, creatively, and strategically about hard problems. Let the AI focus on what it does best: processing information and executing defined workflows at superhuman speed.

That's the real promise of the agentic organization: not replacing human intelligence, but giving it room to operate where it adds the most value while everything else runs in the fast lane. Different intelligences, different speeds, different infrastructure—each optimized for what it does best.

Humans are the bottleneck.

I look forward to a future without needless traffic jams around the town square. No more bottlenecks disguised as collaboration. Just clear lanes for different types of work, with proper on-ramps and off-ramps for when you need to switch between them.

And next time, I hope Peru will have a few more ring roads.

Jurgen


November 19, 2025

It's Not Always Cold in the True North

AI demands radical honesty about company values, mission, and principles in the future of work.

AI cannot interpret corporate bullshit or fill gaps with common sense like humans do. As AI systems make more decisions on behalf of companies, vague mission statements and meaningless values become operational liabilities that expose the gap between what organizations claim to stand for and what they actually do.

More than half of traditional corporate enterprises list "Integrity" as a corporate value. Beautiful. Makes the shareholders feel all warm inside. Of course, these are also the same companies incentivizing employees to crush quarterly targets first and maybe tell the truth later if there's bandwidth left over. Need proof? Wells Fargo has collected more scandals than Prince Andrew, all while "integrity" gleamed from their corporate website like a participation trophy.

That charade is ending.

AI isn't just rewiring how companies operate. It's forcing them to face who they actually are. Those cringe-worthy mission statements and inspirational wall art floating around corporate offices are about to become operational code. No more hiding behind managerial poetry.

When algorithms make decisions, they won't charitably interpret your intentions, fill gaps with common sense, or translate your corporate word salad into something workable. They'll do exactly what you tell them to do. Which means you better know what your organization actually stands for, not what sounds good in the annual report.

The AI revolution has delivered an unintended consequence: it demands radical honesty about organizational identity. Companies that thought they could coast on generic corporate speak are discovering that artificial intelligence has zero tolerance for executive fluff and managerial nonsense.

The Illusion of North

Let's start with a humbling cosmic truth: cardinal directions are complete human fiction. "Go West" made for a catchy disco tune, but in the vast indifference of space, it's meaningless advice. Despite what leadership gurus preach, there is no "True North"—not in business, not anywhere.

In the universe's grand scheme, up and down are quaint suggestions. Even on this spinning rock called Earth, "North" is just a convenient lie—a cognitive crutch we invented to avoid wandering in circles. Other species manage fine with "warmer" versus "colder," but humans needed something more abstract to feel superior.

We pointed at the Pole Star or magnetic fields and collectively agreed to pretend a specific direction held objective truth. Cardinal directions? Pure fabrication. But a remarkably useful one.

This shared delusion prevents collective paralysis. Marco Polo, Christopher Columbus, and Roald Amundsen would testify that this convenient fiction helped them reach places they'd never have found otherwise. Without agreed-upon coordinates, "forward" becomes opinion, and collaboration becomes chaos.

Without this invented framework, teams drift toward whatever feels comfortable instead of working toward what matters. We create these coordinates not because they're carved into reality's bedrock, but because alignment among humans requires a shared narrative—a story we can all believe in. Stories about there (where we want to go) versus here (what we're escaping), whether those destinations are real or imagined.

This is organizational identity's function. Purpose, Vision, Mission, Values, and Principles serve as our narrative compass points. They're mental constructs—intangible and invented—but they're the useful fiction that keeps groups moving in the same direction rather than meandering toward whatever seems warmest.


Purpose: Your Existential Anchor

Organizational purpose answers the most fundamental question any entity faces: Why should you keep existing? Not "how do you extract profit." (That's survival mechanics, not purpose.) We're talking about the difference you're attempting to make, the human need you address, the reason anyone should care you showed up.

Purpose functions as your North Star—that fixed reference point that remains constant while everything else shifts. Microsoft's purpose (empowering every person and every organization on the planet to achieve more) wrestles with human potential and technological empowerment. That's not marketing copy; it's an existential commitment that survives strategy pivots and market chaos.

Most companies mistake busy work for purpose. They confuse what they do with why they matter. Real purpose transcends quarterly earnings calls and survives leadership changes.

Purpose: Why do we exist? (my example)

To reimagine collaboration in the age of intelligent agents.

We help organizations shift from hierarchical control to networked alignment—enabling ethical, adaptive collaboration between humans and machines.

We don't champion any single platform, protocol, or product. We build conditions for a future where no one actor owns how we work together.

Vision: Your Destination Beacon

Vision statements answer a deceptively simple question: What do we want to become? Not your current reality, but the compelling future state you're building toward. Think of it as your organizational GPS destination—the point on the horizon that helps navigate when paths get messy.

Purpose is your moral compass (why you exist); vision is your destination (where you're headed). Purpose connects to larger human needs and rarely changes. Vision peers 5-10 years ahead and describes the world you're trying to create.

Google's vision "to provide access to the world's information" paints a clear picture of barrier-free knowledge. Effective vision statements walk a tightrope: ambitious without being delusional, inspiring without being vague, specific enough for direction while broad enough for creative execution. Most fail because they're either too grandiose to believe or too generic to be useful.

Vision: Our Destination Beacon (my example)

We're building toward Networked Agentic Organizations (NAOs)—where autonomous agents and empowered people co-create value across open, evolving ecosystems.

In this future, work is negotiated, decentralized, and grounded in mutual respect—between humans, machines, and the systems they inhabit.

Mission: Your Current Job Description

Mission is what you actually do right now. Unlike vision (future aspiration) or purpose (existential why), mission lives in the present tense. Purpose provides inspiration, vision provides destination, mission provides the current route.

IKEA's mission—"To offer a wide range of well-designed, functional home furnishing products at prices so low that as many people as possible will be able to afford them"—is as straightforward as their assembly instructions (theoretically). The best missions balance specificity for direction with flexibility for growth.

Most mission statements fail because they're either so broad they're meaningless ("providing solutions") or so narrow they become straitjackets. The sweet spot delivers operational clarity that enables strategic flexibility.

Mission: Our Current Job Description (my example)

We cultivate a community of practice—and practice what we preach—by developing a pattern language for structuring, distributing, and governing work between humans and agents.

Values: Your Moral DNA

Company values represent fundamental beliefs about what matters—your organizational DNA influencing everything from boardroom decisions to break room conversations. They shape how you treat employees, customers, and the world. When authentic, they deliver tangible benefits: cultural cohesion, talent magnetism, engagement, and autonomous decision-making at scale. When inauthentic... well, you still make headlines, just not the kind you want.

The difference between bland, ineffective values like "integrity" and specific ones like Patagonia's "protecting the home planet" shows why specificity matters. Specific values guide real decisions instead of decorating conference rooms (or bringing gleeful smiles to the faces of news reporters).

Values only work when they're authentic. Leadership hypocrisy—saying one thing, doing another—doesn't just neutralize values; it breeds active cynicism. In our transparency-obsessed age, that cynicism spreads fast and costs more than most executives realize.

Values: Our Moral DNA (my example)

What do we stand for, no matter what? (Borrowed from the book Human, Robot, Agent)

Fairness
We treat others equitably without bias. Humans and AIs must both commit to just outcomes.

Reliability
We show up, follow through, and deliver. Trust emerges through consistent performance—from people and machines.

Safety
We protect others from harm—physical, emotional, and systemic. Safety is non-negotiable.

Inclusivity
We create space for everyone. We amplify marginalized voices and build systems serving the full spectrum of humanity.

Privacy
We respect boundaries and protect data. Dignity requires discretion—from all agents.

Security
We defend what matters. Resilient systems—and people—must resist manipulation and protect the commons.

Accountability
We own our actions and consequences. No excuses—from anyone or anything.

Transparency
We explain our reasoning. Whether human or algorithmic, decisions must be traceable and honest.

Sustainability
We consider impact beyond the short term. The future of work can't come at the planet's expense.

Engagement
We approach work with curiosity and care. Great collaboration—human or artificial—feels energizing, not extractive.

Principles: Your Behavioral Translation

Principles bridge the gap between moral ideals and Monday morning decisions. While values might declare "we believe in honesty," principles spell out what that means operationally: "we never mislead customers, even when it costs us a sale."

Values are your moral compass—core beliefs about right and wrong. Principles are your detailed map—specific, actionable rules for navigating daily decisions. Values answer "what we believe." Principles answer "how we act on those beliefs."

Amazon's Leadership Principles like "Customer Obsession" and "Invent and Simplify" function as operational tools making abstract values tangible and testable. They might be executive theater, but they're theater with impact, evidenced by Amazon's mind-bending growth over three decades.

Principles: Our Behavioral Algorithms (my example)

How do we act on our values—concretely? (Borrowed from the book Human, Robot, Agent)

We Watch the Interconnected Environment
Design for complexity, not simplicity. Track second-order effects, enable cross-boundary flow, and treat systems as entangled webs—not isolated silos.

We Focus on Sustainable Improvements
Solve problems at the root. Prioritize long-term impact over short-term fixes, and invest in change that compounds over time.

We Prepare for the Unexpected with Agility
Build flexibility into everything. Use options, scenarios, signals, and slack to stay ahead of volatility and shift with confidence.

We Challenge Mental Models with Diversity
Break the bubble. Invite difference, confront assumptions, and use cognitive variety to unlock creative breakthroughs.

We Seek Feedback and Learn Continuously
Build tight feedback loops. Make reflection routine, and treat every input as an opportunity to adapt and evolve.

We Balance Innovation with Execution
Don't just invent—deliver. Create space to explore, but anchor it with discipline and direction.

We Take Small Steps from Where We Are
Start local, think systemic. Use light interventions to trigger larger change, and leverage what's already in motion.

We Push for Decentralized Decision-Making
Distribute control. Trust autonomous teams, and let coordination emerge through clear interfaces—not top-down command.

We Grow Resilience and Anti-fragility
Don't just survive—get stronger under stress. Design for bounce-back, adaptation, and opportunity in disruption.

We Scale Out with a Networked Structure
Structure for emergence. Connect through platforms, protocols, and peer networks—not rigid hierarchies.

The Codification Challenge

In AI-enabled organizations, our narrative frameworks must translate into machine-readable guidelines, decision criteria, and algorithmic constraints. This means embedding ethical guidelines directly into how AI systems make decisions. Your Purpose, Vision, Mission, Values, and Principles (PVMVP) need to inform executable code.

Traditional enterprise PVMVP statements read like template exercises:

"To be the leading [industry] company providing innovative [products/services] that create value for our [stakeholders] while maintaining the highest standards of [virtue]."

These might be technically correct, but they're completely forgettable. They provide zero real guidance because they could apply to virtually any company in any industry. And no AI agent can act on them.

Artificial intelligence isn't just adding complexity to organizational guidelines—it's fundamentally changing the rules. Traditional organizations survived with vague or inconsistent "guidance" because humans naturally filled gaps, interpreting unclear directions through cultural context and personal judgment. AI systems lack that interpretive charity.

When an AI system optimizes for "customer engagement," it pursues that goal relentlessly regardless of whether engagement comes from valuable content or addictive misinformation. This creates what researchers call the "explicit encoding" challenge—a corporate version of Nick Bostrom's paperclip problem.

In AI-dependent organizations, directional frameworks can't remain implicit cultural knowledge. They must become machine-readable guidelines, decision criteria, and algorithmic constraints. The shared story must be written so machines can understand and act on it.
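
What might machine-readable values look like? Here's a deliberately small Python sketch (the value names echo this post's list; the decision fields and predicates are my invention, not any standard): declared values become executable screens that a proposed agent action must pass, instead of wall art.

```python
# Hypothetical sketch: values as executable decision criteria.

PRINCIPLES = {
    # value -> predicate over a proposed decision (all fields are illustrative)
    "Transparency": lambda d: d.get("reasoning") is not None,
    "Privacy":      lambda d: not d.get("uses_personal_data") or d.get("consent"),
    "Safety":       lambda d: d.get("harm_risk", 0.0) < 0.1,
}

def screen(decision: dict) -> list[str]:
    """Return the values a proposed decision violates; an empty list means proceed."""
    return [value for value, ok in PRINCIPLES.items() if not ok(decision)]

proposal = {"reasoning": "maximize engagement", "uses_personal_data": True, "consent": False}
print(screen(proposal))  # -> ['Privacy']: flag for human review instead of executing
```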

The Governance Opportunity

Beyond the inevitable codification challenge, several opportunities emerge:

Dynamic Navigation

Accelerating technological change compresses strategic time horizons. Ten-year visions and five-year missions might need reassessment every other year, perhaps more often. Organizations that encode their PVMVP compass can build adaptability into operational frameworks while maintaining stability for meaningful guidance. The story can be rewritten much more often.

Analytical Partnership

AI's analytical capabilities can reveal whether companies actually live their stated PVMVP frameworks. Machine learning algorithms analyze employee surveys, customer feedback, and social media sentiment to identify gaps between stated values and actual behavior. AI processes market trends, customer needs, and technological possibilities to detect discrepancies and nudge narrative refinement.

Algorithmic Governance

As PVMVP frameworks become operational through code, AI systems can enforce principle-aligned policies automatically—routing resources toward projects aligning with company values, screening candidates for cultural fit, or flagging decisions conflicting with stated principles. This represents a shift from forgettable guidelines to algorithmic guardrails. The narrative structure as an actual navigation system.

However, one crucial challenge remains:

When AI agents make decisions based on programmed values and principles, who's responsible for the outcomes? New accountability frameworks must address shared responsibility between human designers and AI systems, differentiating between human error, AI system failure, and emergent AI behavior conflicting with intended principles.

Stakeholders expect companies to demonstrably live up to stated intentions. AI amplifies both positive and negative organizational capabilities. Companies with genuine, well-implemented frameworks can use AI to scale impact dramatically. Organizations engaging in purpose-washing or operating with misaligned systems will find their false stories exposed more quickly and publicly than ever.

What counts is a cohesive story.

It's All a Fantasy

Fantasy writers love pretending they're running climate simulations when they're just slapping "cold north, warm south" on maps because it's easy and nobody wants to explain axial tilt to a dragon.

Game of Thrones epitomizes this lazy approach: Westeros behaves like our Northern hemisphere... except winter lasts however long the plot needs. The Wheel of Time does the same trick—ice and snow up by the Blight, balmy kingdoms in the south—because why redesign a planet from scratch when you can borrow European temperature settings? Even The Lord of the Rings follows the same pattern: Mordor simmers in the southeast while the frigid Forodwaith sits conveniently above everything else.

It's not worldbuilding; it's narrative convenience. Authors map ecosystems where cold "north" equals danger and mystery while warm "south" equals clothing-optional comfort and exotic spices. When I once challenged Tad Williams about using this same device in his Otherland and Shadowmarch series, he seemed offended, as if it was beneath him to acknowledge reusing the same tired storytelling trick.

But I understand the pragmatism. Story rules over everything else. Time wasted explaining alternative compasses to readers is distraction. We need easy agreement on what's where, then focus on what matters: where's the treasure (usually east or west) and where's the undead lair (typically north).

In business adventuring, the same storytelling rules apply.


The Human-Centered Paradox

AI has created an unexpected imperative: organizations must become more human-centered, not less. The technology promising to automate human tasks simultaneously demands greater clarity about human values, purposes, and flourishing.

This isn't philosophical navel-gazing. In a world where algorithms make countless organizational decisions, decision quality depends fundamentally on the quality of Purpose, Vision, Mission, Values, and Principles embedded in digital systems. Companies treating PVMVP narratives as afterthoughts or marketing exercises do so at their peril.

Organizations thriving in the AI age will:

Articulate clearly why they exist, where they're going, what they do, what they stand for, and how they act on those beliefs

Embed PVMVP frameworks authentically into operations, both human and artificial

Use technology to amplify positive impact rather than simply maximizing narrow metrics

Create shared operating systems guiding both people and machines toward common destinations

Maintain human stewardship over difficult questions while leveraging AI capabilities

AI is forcing confrontation with questions we've often avoided: What are we actually trying to accomplish? What kind of world are we building together? How do we ensure our most powerful tools serve our highest purposes?

Companies figuring this out won't just survive the AI revolution—they'll help ensure it serves human flourishing rather than undermining it. The imagined North, East, South, and West are waiting. Whether your organization is ready to navigate by its own compass remains an open question.

Jurgen


November 13, 2025

The Future of Consulting

Business consultancies face the same dramatic disruption as travel agencies.

ChatGPT nearly stranded us in the mountains of Peru.

My husband's AI-generated itinerary looked good on-screen: departing from Puno at Lake Titicaca and then driving through Peru's spectacular southern terrain, with magnificent vistas across technicolor mountains and distant volcanoes. We'd never seen such geological variety—different rocks, different sands, different mountain forms, sometimes teeming with life, sometimes desert-dead.

The afternoon drive through Colca Canyon was pure magic. Deeper than the Grand Canyon and—mercifully—without the tourist hordes. But after sunset, our road trip took a dark turn.

ChatGPT's clever route, dutifully translated into Google Maps' "quickest route," demanded another five hours to reach Camana on Peru's coast. Under normal circumstances, no problem. But the AI's itinerary sent us through the Andes mountains, over rocky and sandy roads, in pitch darkness, without internet, and with nobody else around. We couldn't even see the moon.

No tourist in their right mind takes a mountain road trip after sunset with no guide, no connectivity, no moonlight, and no experience. Doable? Sure. Wise? Absolutely not. As I wrestled our SUV through slippery sand, with a perilous decline beside us, the continuous anxiety of getting stuck ranked among our vacation's least enjoyable moments.

Four hours into this hair-raising drive, we hit a construction roadblock. Two steamrollers completely blocked the road. 😱

Surveying the scene at 9 PM, with only the stars as our witnesses, one thought consumed me: "Oh, my God. We're stuck for the night."

How We Got Advice Before

Agile coaches and business consultants should pay attention to how the travel industry was disrupted.

Twenty-plus years ago, we paid travel agents—let's call them consultants—to plan our vacational transformations through Brazil, Cuba, and Chile. They handed us brochures (think white papers by travel experts) and booked hotels, train tickets, and guided tours months in advance. We browsed travel blogs and stories (basically case studies) to prepare. We even attended holiday fairs—imagine conferences packed with coaches and consultants—where agents and vendors displayed their many options.

Once traveling, we carried Lonely Planet or Rough Guides (books stuffed with expert advice) that we consulted dutifully for tips on hotels, restaurants, cafés, sights, and activities. We also lugged geographical maps, translation dictionaries, and other paper tools (the travel equivalent of planning poker cards, sticky notes, and business model canvases) to navigate unfamiliar environments. For questions, we had to rely on locals through elaborate hand gestures and confused faces.

Like memories of earlier vacations, this antiquated travel approach has vanished completely.

How We Get Advice Now

These days, when we explore Canada, Japan, the Caribbean, or—like this year—Peru, everything has changed.

For high-level, timeless advice like cultural expectations and optimal travel seasons, we might consult a travel book. Influencer vlogs and travel stories occasionally provide useful destination overviews. But for concrete, context-dependent advice, online tools win every time.

We use Booking for accommodations, comparing pricing, locations, and reviews. Google Maps finds the best-rated cafés and restaurants. For attractions and activities, we query ChatGPT, Perplexity, or TripAdvisor. Our questions about history, culture, nature, and geography go directly to the AIs. And GetYourGuide and Withlocals handle guided tours and tickets whenever we want. (I won't mention TikTok or Instagram because we're ancient, but I hear that younger travelers mine these platforms for instant tips and trends. I satisfy my caffeine needs with Google.)

Then there are the digital conveniences that didn't exist twenty years ago. Airalo's virtual SIM cards provide cheap internet access anywhere (except in the Andes Mountains, apparently). WhatsApp connects us directly with hotels, hosts, guides, and ticket offices. And Google Translate enables us to talk with virtually anyone.

Communication barriers have evaporated, along with traditional travel agents.

The Future of Coaching and Consultancy

The consultancy industry is following the travel agent playbook, whether it likes it or not.

Future organizations won't need travel agents (consultants), brochures (white papers), travel stories (case studies), or travel fairs (conferences). For concrete, contextual, and time-sensitive advice, online tools are faster and easier. Business transformation now demands a level of speed and adaptability that's impossible to achieve with generic, outdated knowledge trapped in consultants' heads.

Business transformation now demands a level of speed and adaptability that's impossible to achieve with generic, outdated knowledge trapped in consultants' heads.

Why rely on one travel agent's stale advice on accommodation, food, and events in Lima when ChatGPT, Perplexity, Booking, Google Maps, and TripAdvisor live on your phone 24/7? Business consultancy faces the same disruption. As AIs acquire more context about companies, people, and environments, they'll deliver concrete, accurate, real-time advice that no human coach or consultant can match.

Knowledge is now a commodity.

I will make myself available in 2026 for advisory positions at a handful of startups and scaleups. Let me know if you're interested. I will spend most of my time blending timeless (human) wisdom with contextual (machine) advice. If that's what your business needs, I'm here for you.

Soon, each company will deploy a legion of digital consultants for employees' every need. Much cheaper, too. And without the method wars and thought leader drama.

Contextual Tips Versus Timeless Advice

Does this doom consultancy entirely? Not quite. But the industry faces severe disruption.

Digital technologies transformed travel agents by shifting bookings online and reducing the market share of traditional agencies. Online platforms offer convenience and price transparency, forcing agents to adapt with digital tools and specialize in luxury, corporate, or complex travel. A Barbell Effect emerges: large online platforms dominate mass-market bookings, while niche agents survive through high-touch, personalized services. Generalist traditional agencies decline, caught in the middle. The industry polarizes between digital giants and specialized experts, with successful businesses embracing technology and focusing on high-value niche offerings in a competitive, evolving landscape.

During our Peru trip, we still hired local experts: guides for tours around Machu Picchu, Lima, Arequipa, and Lake Titicaca. Sure, we could have wandered around with ChatGPT whispering in our earbuds. But having a knowledgeable local human show you around wonderful, unfamiliar environments is irreplaceable. No LLM can enhance a travel experience with personal, improvised anecdotes about Peruvian life.

Agile coaching and business consultancy will follow the same pattern.

Organizations worldwide are embracing cheaper online platforms for personalized, contextual digital transformation tips: high-volume advice at dramatically lower costs. Machines will increasingly guide workers through continuous change, offering highly specific patterns, practices, tips, and tricks that make sense in the moment, tailored to personal preferences and context in ways no agile framework, method, coach, or consultant ever could. Each worker gets their personal ChatGPT + Google Maps + TripAdvisor for business environments.

Machines will increasingly guide workers through continuous change, offering highly specific patterns, practices, tips, and tricks that make sense in the moment, tailored to personal preferences and context in ways no agile framework, method, coach, or consultant ever could.

But humans remain essential. Some coaches and consultants will survive by providing high-touch, personalized (and high-margin) services for organizations seeking timeless advice on top of the tsunami of contextual tips and practices from tools.

Someone must tell them when to use which tools. And someone must tell them when to ignore them entirely.

The Barbell Effect, Again

The Barbell Effect applies here too. On one end, AI will handle the bulk of contextual, tactical advice—the equivalent of "best restaurants near me" or "how to run this retrospective." Cheap, instant, personalized. On the other end, human experts will command premium prices for strategic wisdom—the equivalent of "Should you even have this retrospective?" and "What does success actually look like for your business?"

The middle ground—where most consultants currently live—is disappearing fast. Generic frameworks, templated solutions, and one-size-fits-all methodologies are becoming commoditized by AI. The consultants who survive will be those who can either specialize in high-stakes, high-touch strategic guidance or find ways to enhance and orchestrate AI-driven solutions rather than compete with them.

The travel industry's transformation offers a blueprint. Successful travel agents didn't fight the digital revolution—they rode it. They identified what humans do better than algorithms (relationship-building, complex problem-solving, emotional intelligence, ethical judgment) and doubled down on those capabilities.

Smart consultants will do the same. They'll become curators of AI-generated advice, helping organizations navigate the overwhelming flood of contextual recommendations. They'll focus on the meta-questions: which problems are worth solving, what trade-offs matter most, and how to design the organization for continuous adaptation.

When there's nobody around to make wise decisions, a successful business might end up as a historical footnote.

The Wisdom That Tools Can't Provide

As I tried hard not to freak out, mentally calculating the amount of water we'd have to share throughout the night, my husband happened to spot a half-hidden "detour" sign on the road's shoulder with his phone's flashlight. There was a way out!

We finally continued our Andes escape through a maze of temporary roads and construction zones. We still lacked internet, and Google Maps was useless, showing our position on a grid of roads that was possibly planned but entirely nonexistent. But we made it.

Pure luck and intuition (with minimal wisdom) extracted us from our predicament. We reached our hotel in Camana at 11 PM, dinnerless but educated. Road 109 remains etched in our memories forever.

Our tool reliance nearly stranded us in Peru's mountains. Someone should have warned us against driving mountain roads after sunset with no guide, connectivity, moonlight, or experience. We could have better examined the terrain Google Maps was suggesting to us rather than mindlessly following its recommendation. We could have informed ourselves about the wisest choice among our options. We might even have paid for that guidance.

I will make myself available in 2026 for advisory positions at a handful of startups and scaleups. Let me know if you're interested. I will spend most of my time blending timeless (human) wisdom with contextual (machine) advice. If that's what your business needs, I'm here for you.

The future belongs to coaches and consultants who can tell organizations when to trust their tools and when to ignore them completely. Sometimes the wisest route isn't the fastest one, and sometimes almost getting lost teaches you more than any perfectly planned journey ever could.

Just ask anyone who's driven Peru's Road 109 in the dark.

Jurgen


November 4, 2025

The Four Tensions of Sociotechnical Systems

From Political Compass to Viable System Model: Autonomy, Control, Adaptation, and Purpose

Every team faces the same fundamental question: How do you balance freedom with coordination, spontaneity with discipline, performance with agility, and self-actualization with shared purpose?

Is your team burning sprints on coordination theater while the actual work goes undone? Is management swinging between micromanagement fascism and leadership anarchy? Is your organization building a surveillance architecture or flying blind through market changes? And do your company's shared values feel like a warm group hug or more like a corporate straitjacket?

Welcome to the eternal balancing acts that determine whether your sociotechnical system thrives or dies.

Note: I'm writing this from poolside at my hotel in Arequipa, Peru, day six of what should be a blissful vacation. Any rational person would ignore work completely and focus on their pisco sours. But when you stumble onto a mental model that cuts through decades of political and organizational nonsense, rationality takes a backseat to intellectual excitement. I just need to share this while sipping my chicha morada.

The Simplest Tension: Left vs. Right

Relax. This is not a political commentary disguised as management theory.

But to understand why most organizations are trapped in simplistic thinking about systemic tensions, we need to acknowledge how badly we've been indoctrinated by political discourse. My country, the Netherlands, just survived another election cycle (with results that made international headlines—more on that another time). And the stories in the media were as simplistic as ever.

The media loves its binary narratives:

Left = collective welfare, equality, coordination. When it goes wrong: anarchist chaos or communist delusions.

Right = individual freedom, self-reliance, autonomy. When it derails: authoritarian nightmares or fascist fantasies.

Peru offers an interesting example. Over the last twenty years, the country has ping-ponged through governments, mostly center-right to right-wing, with brief leftist experiments like Pedro Castillo's spectacular 2021-2022 flameout. Alan García and Ollanta Humala started left but governed center. The pattern is that traditional institutions stay conservative, the left can't hold power, and the political landscape fragments into right-leaning chaos. At least, this is what the AIs have told me. They know more than I do.

The traditional left-right framework is the stone axe of political analysis. It's a one-dimensional tug-of-war that might have worked when societies were simpler, but applying it to modern sociotechnical systems is like measuring the finish times of Olympic athletes with a sundial.

The Political Compass

American activist David Nolan ripped the traditional model apart in 1969. His alternative chart revolutionized political mapping by adding a second dimension—separating economic freedom from personal freedom. Later, this spawned models like the Political Compass, which plots ideologies across economic and social axes, capturing the nuances of authoritarianism and libertarianism that linear scales completely miss.

The two axes changed everything:

Economic (Left–Right): market freedom versus state intervention

Social (Authoritarian–Libertarian): personal freedom versus social control

Suddenly, the world gained depth. You could be economically conservative but socially liberal. Or economically progressive but socially authoritarian. The binary became a matrix—four quadrants instead of two camps.

But for organization designers like me, there's a problem.

These frameworks are soaked in political baggage. Drop terms like "liberal," "socialist," "libertarian," or "conservative" into any conversation and watch people's brains shut down as tribal reflexes take over and the excrement hits the ventilator. Decades of ideological warfare have weaponized these words beyond any analytical usefulness.

Since I'm not writing political propaganda, I needed something cleaner.

The Sociotechnical Compass

Working with ChatGPT, I stripped away the political theater and reframed the axes for sociotechnical systems—teams, businesses, organizations, and other technology-enabled human networks that actually get stuff done. Following the POSIWID principle (the Purpose Of a System Is What It Does), my "Sociotechnical Compass" describes not ideology (what people believe) but systemic behaviors (what people actually do).

X-axis: Individualist ↔️ Collectivist

The eternal struggle between autonomy (local adaptation, individual agency) and coordination (shared standards, collective action). How much can group members freelance versus how much must they conform to group expectations around communication, coordination, and collaboration?

Y-axis: Experimental ↔️ Controlled

The balance between experimentation and stability, between spontaneity and discipline. How much can group members improvise versus how much will they be constrained by rules, policies, standard operating procedures, and "knowing their place in the system"?

This de-politicized compass describes how teams and organizations actually behave, using language that won't trigger anyone's ideological immune system. Nobody wants to be labeled a socialist or authoritarian just because they think a bit of coordination or governance might help their organization survive in a chaotic world.


The Viable System Model (VSM)

While discussing my Sociotechnical Compass with ChatGPT, I had one of those moments that almost made me drop my cup of queso helado.

Without realizing it, I had recreated the foundational structure of Stafford Beer's Viable System Model (VSM)—the cybernetic blueprint of any living, self-regulating system.

The VSM describes five levels that all teams and organizations must implement to remain viable:

Operations - Do the actual work. (The part of the organization that creates actual value.)

Coordination - Prevent conflicts between units. (Manages interdependencies and shared resources.)

Optimization & Control - Allocate resources & enforce rules. (Ensures coherence, efficiency, and accountability.)

Intelligence & Adaptation - Scan the environment & adapt. (Anticipates change, learns, and innovates.)

Identity & Culture - Define purpose & identity. (Provides long-term direction, meaning, and belonging.)

(For a larger discussion of the VSM, check out "Let's Start from Scratch: Organizational Design in the Age of AI.")

In essence:

Level 1 is pure operational anarchy—work getting done without interference.

Levels 2-5 are the constraints that keep the system viable through coordination, governance, adaptation, and purpose.

Four Kinds of Delegation

Let's talk about viable teams without the management consulting mysticism.

In a perfect world, teams would spend 100% of their time creating value for stakeholders. Pure operations, zero waste, maximum output and impact. But teams learned long ago that pure operational focus is a fast track to extinction. They won't remain viable systems for long.

First, teams figure out quickly they need coordination. Someone has to agree on what work gets done, why, who does it, and when. This shifts capacity from VSM Level 1 (operations) to Level 2 (coordination).

For example, I'm traveling through Peru with my extended family. Most of the time, we do whatever we want—reading, writing, shopping. But periodically, we coordinate: which tours to take, where to eat, when to have breakfast, and who gets to walk around with the credit cards. That's classic Level 2 coordination work.

Second, larger teams realize that not everyone behaves optimally all the time. Humans are fallible and need occasional nudging to follow the rules. And thus, social systems tend to allocate capacity to VSM Level 3: optimization & control. Someone manages resources, provides servant leadership, or enforces command-and-control.

For example, Lima traffic is an adrenaline rush that demonstrates what happens when a sociotechnical system allocates almost zero resources to Level 3 optimization & control. It's rather exhilarating and very educational.

Third, smart teams understand that environmental blindness kills systems. They need awareness of trends, changes, and whatever else might impact customer needs tomorrow. They delegate time to VSM Level 4: intelligence & adaptation, continuously inspecting and adapting to the environment. (Agile, anyone?)

Fourth, successful teams recognize the power of cohesion—shared purpose, values, identity. Systems with strong collective identity outlast fragmented ones. They invest in VSM Level 5: identity & culture.

For the record, the exact delegation mechanism doesn't matter. Everyone might spend 5-10% of their time on coordination, or one person gets dedicated control responsibilities to do this kind of work. Implementation varies. Nobody cares. What matters is that the system allocates capacity across all VSM levels in the way it sees fit.

Four Tensions Between Chaos and Order

After realizing I'd accidentally recreated the VSM's foundational structure (or at least the first three levels of it), I understood what this model actually describes: the multiple balancing acts necessary in every sociotechnical system.

Each level above the operational level adds a different type of constraint—necessary tension that keeps the system alive. No constraints equals anarchy. And anarchy, despite libertarian fantasies, rarely works. Even the most ardent freedom advocates grudgingly admit that some rules need to exist.

My extended Sociotechnical Compass now identifies four distinct balancing acts between Level 1 (operations) and its regulating levels:

Autonomy vs. Coordination (Individualist ↔️ Collectivist)
Balancing part freedom with whole needs: Level 1 vs. Level 2

Spontaneity vs. Discipline (Experimental ↔️ Controlled)
Balancing improvisation with governance: Level 1 vs. Level 3

Performance vs. Agility (Optimized ↔️ Adapted)
Balancing excellence with adaptation: Level 1 vs. Level 4

Self-actualization vs. Resilience (Flexible ↔️ Coherent)
Balancing individual growth with shared purpose: Level 1 vs. Level 5

(Sorry, I have no time to draw a pretty picture of this model. I'm trying to enjoy Peru.)

Each dimension represents a different conversation between operations and its meta-levels:

Level 1 says: “You create value.”

Level 2 says: “You work together.”

Level 3 says: “You play by the rules.”

Level 4 says: “You adapt and evolve.”

Level 5 says: “You share one purpose.”
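
In lieu of the missing picture, here is a compact sketch of the model as a Python data structure (the field and constant names are mine, not canonical VSM or compass terminology): each tension pairs operational Level 1 against exactly one regulating level.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tension:
    operations_pull: str   # what Level 1 (operations) wants
    regulating_level: int  # the VSM meta-level it negotiates with (2-5)
    regulating_pull: str   # what that meta-level wants

SOCIOTECHNICAL_COMPASS = (
    Tension("Autonomy",           2, "Coordination"),
    Tension("Spontaneity",        3, "Discipline"),
    Tension("Performance",        4, "Agility"),
    Tension("Self-actualization", 5, "Resilience"),
)
```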

These same four balancing acts play out in every society: How much do we allow collectivism to override individualism? How much authoritarianism should constrain spontaneity? To what extent can surveillance trump privacy? And how much collective purpose may supersede individual self-actualization? Sadly, media commentators reduce this rich, multidimensional reality to the braindead narrative of "left versus right" because complex narratives don't attract eyeballs nor do they sell advertisements.


The Moral of This Framework

Sociotechnical system viability—whether teams, organizations, or societies—isn't a destination. It's a perpetual dance between anarchy and control, between chaos and order.

Economists claim that a "healthy" percentage for government expenditure as a share of GDP commonly falls in the range of 30–40% in developed nations, with many economic studies suggesting that the optimum for supporting sustainable economic growth is often below 40% and ideally closer to 25–35%. Likewise, healthy teams should probably dedicate 25–40% of their total capacity to coordination, control, intelligence/adaptation, and identity/culture work (VSM levels 2–5). Everything else should be pure operations and value creation (VSM level 1). The specific allocation is context-dependent and subject to ongoing social, political, and even philosophical negotiation.
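
As a back-of-the-envelope check, here is a hedged sketch of that heuristic in Python (the team numbers are invented, and the 25–40% band is this post's rule of thumb, not an established benchmark):

```python
# Hypothetical capacity check against the 25-40% heuristic for VSM levels 2-5.

def check_allocation(capacity: dict[int, float]) -> str:
    """capacity maps VSM level (1-5) to its share of total effort; shares sum to 1.0."""
    meta_share = sum(share for level, share in capacity.items() if level >= 2)
    if meta_share < 0.25:
        return f"Only {meta_share:.0%} on levels 2-5: viability at risk (environmental blindness?)"
    if meta_share > 0.40:
        return f"{meta_share:.0%} on levels 2-5: coordination theater is eating value creation"
    return f"{meta_share:.0%} on levels 2-5: within the heuristic band"

team = {1: 0.55, 2: 0.15, 3: 0.10, 4: 0.12, 5: 0.08}  # invented numbers
print(check_allocation(team))  # -> "45% on levels 2-5: coordination theater ..."
```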

Over coffee yesterday, our tourist guide in Arequipa shared stories about daily life challenges for most Peruvians. While my family enjoys incredible sights, foods, and experiences, we're acutely aware of our privilege. Most Peruvians can't afford these luxuries. Peru's Sociotechnical Compass has unresolved tensions across all four dimensions, and we hope the people find their path through these inevitable balancing acts.

There is no simple left or right in politics and there is definitely no left or right in organizational design. There are only systems that must balance four essential tensions to remain viable. The organizations and societies that master this dance survive. The ones that don't become historical lessons for business professionals and inspiring stories for spoiled tourists. The face of the Earth is littered with countless examples.

We'll arrive at Machu Picchu tomorrow.

Jurgen

P.S. You can follow my trips on Polarsteps.


October 28, 2025

My 18-Month Battle With Expert Hallucinations

A cautionary tale of how both AI and human experts can be completely wrong

After 18 months of failed treatments and contradictory advice, I discovered something shocking: expert hallucination isn't limited to artificial intelligence. Sometimes, human experts are even worse at seeing what isn't there.

For three years, I was a running machine. 2,500 kilometers per year, 50 kilometers per week … rain or shine. Consistent as the Swiss train schedule.

Then, by the end of 2023, my body staged a coup.

A deep, nasty pain took up residence in the top of my left leg, high in the buttock region. The cause seemed blindingly obvious: I'd finally overdone it. My hamstring was giving up.

ChatGPT agreed. "Hamstring tendonitis," it declared with its usual confidence. Inflamed tendon. Classic overuse injury. Time to rest, buddy.

So I rested. And rested. And rested some more.

What followed was twelve months of the most maddening Groundhog Day imaginable. I'd take a break for months, feeling the pain fade like a bad memory. And then, believing I'd sufficiently healed, I'd lace up my running shoes and within five kilometers, my hamstring would remind me exactly why optimism is for suckers.

Back to the AI oracle I went. ChatGPT, Claude, whoever would listen—they all sang the same gospel: "hamstring tendonitis." Poor blood flow to tendons, they explained. Healing takes time. More rest. Always more rest.

I'd had this exact problem eight years earlier with the other leg. The advice of the sports doctor back then was: rest a few months and do some exercises to strengthen the weak muscles. The pattern was clear, the solution obvious, the outcome inevitable: another couple of months of doing absolutely nothing while my fitness evaporated like morning dew in the Caribbean.

By early 2025, after a full year of this medical Möbius strip, I finally cracked and turned to an actual human, expecting to hear the same mantra I'd heard eight years before. But this time, I consulted a certified physiotherapist specializing in manual therapy.

After examining me and painfully prodding my posterior until I questioned both his qualifications and his humanity, he delivered a diagnosis that would have made Freud proud: there was nothing wrong with my body. The problem, he announced, was in my head.

If I could run 2,500 kilometers per year, he argued, why would my body suddenly throw a tantrum over a few measly kilometers? He started asking the important questions: What happened in 2023? Any stress? Work problems? Relationship drama? Uncertainty about the future? Was I perhaps subconsciously projecting anxiety onto my glutes?

I'll give him points for creativity, but the whole thing reeked of hammer-and-nail syndrome. My friends call me a "mental flatliner"—so emotionally stable you could calibrate seismographs off my mood swings. More importantly, his psychosomatic theory couldn't explain why eight-hour flights and twenty-hour car rides turned my left buttock into a full-time complaints department.

That's when I spotted his tell. Sure enough, his credentials included an entire book he had written about psychosomatic injuries. Every client walking through his door was apparently a potential case study for his pet theory. When your favorite tool is a psychological hammer, every client's aching body part looks like a repressed emotion.

I cancelled my remaining appointments.

So there I was, trapped between two brands of bullshit. The AIs insisted on rest while my tendon stubbornly refused to heal. The human expert wanted to psychoanalyze my butt. Neither approach was working, and I was getting tired of hobbling around with a sore arse.

It was time for some good old-fashioned rebellion against expert opinion.

What if, I thought, we all had it backwards? What if, instead of starving my injury of activity, I fed it just enough to wake it up? Blood flow problems? Maybe some light running could help circulation. Instead of complete rest, what if I stayed active just enough to help my body heal, but not enough to make things worse?

I started small. Tiny runs—2-3 kilometers, a few times per week. I was aiming for the Goldilocks zone: just enough stimulus to nudge the healing process without triggering another cycle of pain and frustration. The approach felt like walking a tightrope while juggling, but I was done trusting the certainties of experts (human and AI alike) about my body.

Six months later, the injury has practically vanished. I'm back to 25 kilometers per week and can run over ten kilometers without my hamstring filing a complaint. The pain that had haunted me for a year and a half has disappeared like a politician's promises after election day.

Curious about what had actually worked, I fed the entire saga to Gemini for a post-mortem analysis. The answer was both illuminating and infuriating.

I'd never had "tendonitis" in the first place, Gemini said. Neither the AIs nor the human expert had bothered to consider the obvious alternative: proximal hamstring tendinopathy (PHT)—a condition involving degeneration and failed healing in the tendon, not inflammation.

The distinction matters. Inflammation responds to rest. Degeneration requires the opposite. My pain from sitting was just my body weight compressing the damaged tendon against my sit-bone, not some mysterious psychosomatic manifestation of workplace stress.

The earlier AIs' advice to "rest" had created a self-reinforcing cycle of weakness. Every break kept my tendon weak and fragile, turning each return to running into a guaranteed re-injury. I was caught in a "rest-weaken-re-injure" loop that could have continued indefinitely if I'd kept following that logic.

The human expert, meanwhile, was so busy looking for psychological nails that he'd missed the mechanical hammer entirely. One-tool experts are dangerous precisely because they're so confident in their singular domain. They don't see problems—they see confirmation of their worldview.

My "Goldilocks" rebellion wasn't just lucky guesswork. I'd accidentally stumbled onto the gold-standard, evidence-based treatment for tendinopathy: progressive loading. Those modest 2-3 kilometer runs weren't just maintaining fitness. They were applying mechanical load that signaled my tendon cells to rebuild stronger and more organized tissue. I'd ignored both artificial and human intelligence to accidentally discover the correct treatment through pure stubbornness and a refusal to accept expert consensus.

But the real lesson isn't about running injuries or hamstring tendons. It's about the dangerous mythology we've built around expertise in the age of AI.

We've created a false dichotomy: either trust the machine or trust the human. But both can be spectacularly wrong, often for similar reasons. AIs are pattern-matching machines trained on existing knowledge, which means they'll confidently regurgitate conventional wisdom even when it's outdated, incomplete, or simply mis-categorized. Human experts, meanwhile, are walking bundles of cognitive bias who see their specialty everywhere they look.


The AIs failed because they were trained on medical literature that treats tendonitis and tendinopathy as similar things, despite research showing they're different conditions requiring opposite treatments. They gave me the statistically most likely diagnosis based on a mix-up of categories.

The human expert failed because he'd found his intellectual home in psychosomatic medicine and wasn't about to let a simple mechanical injury ruin his favorite narrative. When you've written a book about minds creating body problems, every body problem looks like evidence of a mental cry for help.

Both human and AI failed because they were too confident in their knowledge and too incurious about alternative explanations. The AIs couldn't think beyond their training data. The human couldn't think beyond his pet theory. Meanwhile, the solution required neither artificial intelligence nor human expertise—just careful attention to what was actually happening, a willingness to experiment, and enough intellectual stubbornness to question received wisdom.

This pattern repeats everywhere. In business, we see the same dynamic between AI-powered analytics and human "domain experts." The algorithms confidently extrapolate from historical patterns, while the humans confidently apply their favorite frameworks. Both miss the messy reality that doesn't fit their thinking models.

The important skill in the future of work isn't choosing between human and artificial intelligence. It's knowing when to ignore both. Sometimes the smartest move is to tune out the experts, human and machine alike, pay attention to what's actually happening, and run your own experiments. Expertise has the curse of knowledge written all over it: sometimes an amateur with fresh eyes accidentally discovers what the experts' training prevents them from noticing.

Don't get me wrong—I'm not advocating for wholesale rejection of expertise. When you're building a bridge or performing surgery, you want people who've spent decades learning their craft. But when you're dealing with complex, socio-technical problems that don't fit neat categories, expertise can be a trap.

My hamstring is now happily carrying me through 25-kilometer weeks, but the real victory was learning to trust my own observations and reasoning over pet theories. The future belongs to those who can navigate between overconfident AI and overconfident humans, using both as sources of insight while trusting neither as oracles. Sometimes the wise choice is to ignore the wise and trust your own stubborn curiosity.

Jurgen


October 24, 2025

You Cannot Please Everyone

Those who criticize AI-assisted writing are snobbish fools failing to grasp the idea of a target audience.

The only good reason for writing posts complaining about other people's usage of AI is when your audience consists of snobbish, patronizing fools.

I recently watched an animated TV show with my ten-year-old friend. It was horrible. The animations were cheap, the voice acting was cringy, and the plot was nonexistent. But guess what? None of that mattered because the show was all about tanks. My ten-year-old buddy loves tanks. And when a TV show has a tank in every animated scene, it's a f**king great show.

Here are a few more tanks:

Now, I could go online trashing this TV series for the horrendous quality of its animations. But what would be the point? I'm not their target audience. I'm not a ten-year-old obsessed with tanks. If I slam the producers for their crappy product, I only expose myself as a snobbish, patronizing fool.

This morning, my hubby made a similar observation about all-you-can-eat buffets. We despise everything about them: the industrial-grade slop, the heat lamps slowly destroying what might have once been food, the whole depressing spectacle. But our disgust is completely irrelevant because we don't eat there. Millions of people apparently enjoy these nutritional wastelands, and good for them. I'm not joining their feast, but I'm also not wasting energy condemning their choices.

In the age of artificial intelligence and robotics, it's crucial to remind ourselves: only our target audience counts. Nobody else matters. We cannot please everyone. It's the same for any other product we make.

I vividly remember my first critical book reviewer. I was about to publish my first book, Management 3.0, and my publisher, Addison-Wesley, had assigned a few reviewers to evaluate my manuscript. Most reviewers loved it. One person hated it. The hater was a person with an academic background who rated the many jokes in my book as "highly unprofessional." But the publisher said it didn't matter. "He's not your target audience." Fifteen years later, the book has sold 70,000 copies and is regarded by many as a classic. It simply made no waves in academia, and that's perfectly fine.

The AI writing hysteria follows the same pattern, except with more sanctimony and less self-awareness.

Every week, some writer publishes another hand-wringing essay about AI "polluting" the literary landscape. They're horrified that ChatGPT might be ghostwriting someone's newsletter, or that Claude helped craft a blog post. They speak of authenticity and craftsmanship as if they're defending the last monastery from barbarian hordes.

But unless they're consuming that AI-assisted content, their outrage is performative nonsense.

Only your target reader decides what's brilliant. Nobody else matters. You cannot please everyone.

More tanks:

Maybe your readers demand prose crafted with the delicacy of a Swiss watchmaker. If so, stay far away from AI. Your audience might sniff out synthetic sentences faster than a bloodhound. Or maybe your readers just want information delivered efficiently. In that case, milk every AI tool available because speed and volume matter more than artisanal sentence structure.

In my case? I write for people who like challenging, controversial viewpoints presented to them as an enjoyable read. That means I take the middle road: each post (including this one) starts as a fully hand-written draft because it's important that the core message comes from me. Then, I might turn to ChatGPT, Gemini, and Claude for help in additional research, feedback on structure, an occasional sentence-level rewrite, and feedback on style. I even have Claude checking my posts against my ethical values. But every post ends with my final polish. I want every essay to be in my tone of voice. The average long-form post on Substack costs me about eight hours of work. Without AI, it would probably be more like sixteen. I consider that a win because I can offer my readers twice as much of what they like.

And only the reader decides if I did a decent job offering them food for thought as an enjoyable read. Nobody else. I cannot please everyone.


Don't misunderstand me; it does make sense to evaluate and discuss the ethical side of production processes of any kind, including writing. In the case of animated TV shows and food production, we might discuss worker rights or animal welfare. With AI used in writing, we should discuss the sustainability, copyright, and employability problems that are part of the AI revolution. All of that is fair game for debate.

But not the pointless whining about machines and algorithms.

It makes little sense for people to complain about other writers using AI in their writing. If they're not the target audience, their opinion of the quality of the writing is irrelevant. Judging the creative process of something they're not even consuming makes them look like snobbish, patronizing fools. The only good reason for writing posts lamenting other people's AI usage is when their own audience consists of snobbish, patronizing authors. They will happily lap it up.

Don't be like them. Go ahead and use AI in your writing when it serves your audience. If all your readers want is ridiculous stories about tanks, more stories about tanks is good. Endless stories about tanks is even better. Grammar, style, and plot are all irrelevant as long as every paragraph has at least one reference to a tank.

I've sold 200,000 books across multiple publishers and worked with more developmental editors and copy editors than I can count. Every professional I've encountered agrees on this fundamental principle: your target user is the only judge that matters. Everyone else is irrelevant background chatter.

As a creative (or any kind of product designer, for that matter), remember that you cannot please everyone.

Jurgen

P.S. Here are some more tanks:


October 20, 2025

The Stochastic Parrot vs. the Sophistic Monkey: I Trust AI More Than Most Humans

In a world drowning in human bias and misinformation, artificial intelligence offers surprisingly reasonable answers.

I trust AI more than most humans. That's not because the average machine is so brilliant. It's because the average human is so stupid.

Many humans—bless their hearts—are freaking out about "AI hallucinations." As if humans don't confidently hallucinate entire religions, belief systems, conspiracy theories, and process improvement frameworks every single day. Compared to the average human's cocktail of bias, ignorance, and misplaced confidence, these AIs are practically philosophers.

Asking the Machines

Let me give you an example.

The other night, I had a cosmology question—one of those "wait, how does this make sense?" moments that Stephen Hawking probably would have rolled his eyes at. I've been diving into books such as A Brief History of Time, The Fabric of the Cosmos, and The Inflationary Universe—you know, light bedtime reading—and I stumbled over a paradox:

If, according to Einstein, all motion is relative, then how the hell can scientists talk about the absolute velocity of Earth, our solar system, or even our galaxy when measured against the radiation of the cosmic microwave background (CMB)?

Now, if I had thrown that question into my local neighborhood group chat or onto Facebook or TikTok, the odds of getting a coherent answer would have been roughly the same as finding intelligent life on Mars. My friends? Predictably useless. My family? Adorably clueless. And scientists like Brian Cox or Neil deGrasse Tyson are tragically unavailable for clearing up my late-night cosmological quandaries.

So, I did what any 21st-century human does when they seek knowledge without judgment: I consulted an LLM. And it must be said, Gemini delivered the answer I craved: simple, clear, and with neither a confused chuckle nor a condescending sigh. I won't bore you with the physics, but let's just say I found the explanation accurate and elegant.

That's become the pattern lately. Whether it's cosmology, philosophy, complexity, or just the dread of discovering a suspicious-looking spot on my shoulder, the answers I get from AIs are consistently better (or maybe I should say less wrong) than whatever I'd get from the average human in my social circle.

"The answers I get from AIs are consistently less wrong than whatever I'd get from the average human in my social circle."

Bias and Hallucinations in AIs

On the social networks, whenever I talk about how useful AI can be, the usual suspects show up: the technophobes, the digital doomsayers, the self-appointed prophets of "responsible AI." They clutch their pearls and remind me, in the same tone people use to warn everyone about social media rotting our brains and computer games ruining teenagers, that "the machines are biased" and "they hallucinate," followed by the inevitable: "They're just stochastic parrots!" As if that phrase alone grants them a PhD in smugness.

Sure, they're right, technically speaking. Large Language Models are biased, fallible, and overconfident. I've often said LLMs are like politicians: they always have an answer, they crave everyone's approval, and they can be spectacularly wrong with stunning confidence. The average political leader is practically a walking AI, minus the energy-guzzling data center.

"LLMs are like politicians: they always have an answer, they crave everyone's approval, and they can be spectacularly wrong with stunning confidence."

That's exactly why we keep raising the bar for the machines. Every benchmark, safety layer, and alignment model is our way of teaching them a shred of epistemic humility. We audit their biases, fine-tune their judgment, and retrain them endlessly. Unlike the architecture of the human brain, AIs are continuously improved. Their flaws aren't natural failings—they're engineering challenges. And we're getting better at solving them every single day.

I wish we did the same in politics.

Bias and Hallucinations in Humans?

While we're busy auditing and benchmarking algorithms for micro-biases and hallucination rates, nobody is willing to raise the intellectual bar for humans to a similar level of coherence.

"Nobody is willing to raise the intellectual bar for humans to a similar level of coherence."

We seem perfectly fine letting our un-updatable human brains run wild in our social echo chambers. We nod along to friends, family, and followers who still think "doing my own research" means watching two YouTube videos and a handful of funny memes.

We hang onto every word of politicians and social media influencers who couldn't explain the difference between gender and biological sex if you offered them a coupon for a lifetime of free marketing campaigns; who think import tariffs are a punishing fee for foreign governments rather than a tax on their own fellow citizens; and who genuinely believe that immigration, not incompetence, is the root cause of all national misery.

Stupidity is an understatement.

If I ever want to experience real, high-octane nonsense, I just open the news to see what the humans are talking about. There I'll find elected officials proudly denying climate science, mangling economic laws and principles, and sprinkling conspiracy theories like parmesan on a pile of policy spaghetti. Here in the Netherlands, we're in election season again, because the last government was so hopelessly incompetent it couldn't even agree on which brand of bullshit to peddle to their voters.

And if that's not enough, there's always the digital carnival we attend every day: Facebook groups, TikTok videos, and Telegram channels where people pass around misinformation like a bowl of free M&M's—colorful, addictive, and most definitely bad for our health.

So yes, AIs hallucinate. But for Olympic-level delusion and disinformation, we don't need machines. We've already got plenty of humans debating each other on talk shows and at dinner tables.

"For Olympic-level delusion and disinformation, we don't need machines. We've already got plenty of humans debating each other on talk shows and at dinner tables."

Social Media Debates

And on social media, of course.

Every time I post something positive about AI, the comments section quickly fills up with people auditioning for a Logical Fallacies 101 textbook.

First comes the Straw man fallacy: "So you just do whatever the AI tells you?" No. I still have my own brain and a healthy dose of critical thinking. I do whatever makes the most sense to me.

Then the Slippery slope crowd warns me that asking Gemini a question about cosmology could somehow end with robots ruling the planet. That's a bit like saying two chocolates a day will make you obese.

Next up, the Appeal to nature—"But humans are creative, empathetic and sensitive!" Indeed, they are. They also eat and poop, but I don't see exactly how that relates to their reasoning capabilities.

And, of course, the False equivalence: "AIs and humans both make mistakes, so they're not much different from each other." Right. Like saying a computer and a laundromat are equally good at calculations because both can suffer from power failures.

Finally, when people's arguments fail to impress me, there's always the inevitable Ad hominem: "You just hate people." Well, not everyone. But I certainly dislike those who make other people's lives miserable by peddling their bullshit.

Most of these social media "debates" aren't about reason at all. They're about ego. People aren't defending logic; they're defending the fragile belief that machines cannot replace them.

But what hope does the average human have when AIs already beat them at basic reasoning?

The Art of Selection

Speaking of biases…

Some people have accused me of Selection bias: "You're comparing the average AI to the average human," they say, "when you should compare it to genuine experts."

Sure. If I could text a cosmologist at 10 pm, I'd absolutely pick them over the algorithm. If Brian Cox were on speed dial, Gemini could twiddle its digital thumbs. If a certified therapist lived in my guest room, I wouldn't be asking ChatGPT for emotional triage. But that's not how the real world works. Experts are rare, expensive, and—tragically—human. They sleep, eat, travel, and have better things to do than answer my existential questions about quantum foam or organizational design.

It's an interesting case of reverse Availability bias: comparing what's ideally available to what's actually accessible for most of us. The right benchmark isn't "AI versus Einstein." It's "AI versus your brother-in-law who once watched a Neil deGrasse Tyson video."

And in that comparison, the machines win practically every time.

AI is the next best thing when the best thing isn't around—which, let's be honest, is most of the time. When I need health information as I'm puking my guts out over a toilet at 2 am, I'll take the probabilistic reasoning of a large language model over the frantic improvisation of my next-of-kin humans. The future isn't about replacing experts—it's about replacing everyone else who doesn't know what they're talking about. And yes, I might get around to calling the doctor in the morning.

"AI is the next best thing when the best thing isn't around—which, let's be honest, is most of the time."


The Age of Critical Thinking

People often ask me, "What's the most important skill for the age of AI?" And I always say it without hesitation: critical thinking.

You need critical thinking to know which questions to ask an AI. You need it to spot when the chatbot is bullshitting you with a confident hallucination dressed up as truth. And you definitely need it to survive the AI-generated sludge now oozing everywhere across social media.

But none of that compares to the sheer intellectual stamina required when dealing with actual humans. ChatGPT, Claude, and Gemini might occasionally hallucinate, but scroll through your news feed for ten minutes and tell me who's really the master of misinformation. Humans remain the undefeated world champions of confident stupidity. They don't even need large language models to produce vast quantities of nonsense—they're perfectly capable of generating it themselves.

And as for my own field—organization design and the future of work—don't get me started. After decades of transformational frameworks and other corporate nonsense, I sometimes wonder if coaches and consultants have done more long-term damage to companies than AI ever could.

But sure, let's keep raising the bar only for the machines. It seems we can write off the humans.

"Let's keep raising the bar only for the machines. It seems we can write off the humans."

Multiple Perspectives

I'll choose a discussion between three AIs anytime over a discussion between three random humans in a bar.

For example, this weekend, I witnessed a brilliant battle of digital minds between Gemini, ChatGPT and Claude. I had a rather technical question about the best approach to a database design challenge (in Fibery, which sits on top of Postgres). Because I had no direct access to specialists, I went for the second-best option: I asked each of the AIs for advice. Refactor or not?

Gemini told me to keep my current denormalized design as it is. However, both Claude and ChatGPT told me to refactor/normalize the database design for improved performance, leaving me rather confused.

Then, the fun started as I kept copy-pasting the results across three chat windows, watching a heated debate unfold about abstraction layers, persistent formulas, joins, views, triggers and whatnot. It helped that I'd given each of the AIs a personality. ChatGPT (Zed) is rather sarcastic, Gemini is quite condescending, and Claude is more philosophical. It was like watching a debate between three arrogant academics. They couldn't resist the occasional ad hominem jab at each other.

In the end, Gemini won. They all agreed I should keep my current database design denormalized as it is. Claude and ChatGPT reluctantly relented, while still bickering over some irrelevant details. Because we all need to feel that we're not completely wrong, right? (In that sense, the algorithms seemed almost human.)

In an earlier article, I outlined six critical thinking approaches for dealing with overconfident machines. What I applied last weekend was the "Iterative Skeptic." It means I go full-on Delphi Method: I feed each AI the responses of the others and make them argue until they reach a consensus. And even then, after reading all their back-and-forth arguments, it is up to me to make up my own mind.

That is critical thinking.
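For readers who want to replicate the ritual, the loop is easy to script. Below is a minimal sketch in Python; the ask() helper is a hypothetical stand-in for whatever API clients you use for ChatGPT, Gemini, and Claude, and the stopping rule is deliberately left out:

```python
# Minimal sketch of the "Iterative Skeptic" (Delphi-style) loop.
# ask() is a hypothetical placeholder; wire it to your own API clients.

def ask(model: str, prompt: str) -> str:
    """Hypothetical helper: send a prompt to one model, return its answer."""
    raise NotImplementedError("connect this to your ChatGPT/Gemini/Claude clients")

def iterative_skeptic(question: str, models: list[str], rounds: int = 3) -> dict[str, str]:
    """Feed each AI the answers of the others and make them argue."""
    answers = {m: ask(m, question) for m in models}
    for _ in range(rounds):
        for model in models:
            rivals = "\n\n".join(
                f"{m} said:\n{a}" for m, a in answers.items() if m != model
            )
            answers[model] = ask(model, (
                f"Question: {question}\n\n"
                f"Your previous answer:\n{answers[model]}\n\n"
                f"The other advisors said:\n{rivals}\n\n"
                "Rebut them or revise your answer, and say what changed."
            ))
    return answers  # the human reads the full debate and makes the final call
```

Consensus detection is missing on purpose: the verdict belongs to you, not to the parrots.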

Who Are the Real Bullshitters?

Yes, AIs are bullshitters—just like politicians, only with better grammar and without the cameras and autocue. They'll spin a plausible story when they don't know the answer, but at least they don't double down when proven wrong. In my experience, AIs are surprisingly gracious about correction. Show them a logical counter-argument, and they'll apologize, adjust, and move on—something most politicians would likely never do.

And here's the crucial part: on average, the answers I get from LLMs are far more accurate than the ones I get from all those adorable talking monkeys around me. Yet, when I share online that I asked my friends for advice on a technical problem, nobody bats an eye. But the moment I mention consulting three AIs, they're eager to give me lectures about bias, hallucinations, and the coming robot apocalypse.

It's the same story every time: when I say I asked my family about the cosmic microwave background, the social media mob shrugs and moves on. But swap "family" for "ChatGPT," and suddenly everyone is a digital ethicist with a torch and a pitchfork.

That right there is human bias in its purest form—technophobia dressed up as moral superiority. Sure, the machines make mistakes, but at least their bullshit comes with a hefty dose of humility and self-reflection. In just one year, I've heard AIs say, "Apologies, I was wrong," more often than I've heard it from humans on social media in my entire lifetime. With people, the best you can expect is that they ghost you when your logical arguments become too inconvenient for them.

"That right there is human bias in its purest form—technophobia dressed up as moral superiority. Sure, the machines make mistakes, but at least their bullshit comes with a hefty dose of humility and self-reflection."

Compared to the daily deluge of confident nonsense in our social circles, the AIs are practically monks of reason.


Loving Machines and Humans

Let's get one thing straight: I don't actually hate people—though I admit, I do enjoy pretending. Humans are delightful creatures when they're funny, make art, or invent technologies. I love the creativity, adaptability, and sheer absurdity of our species. I consume human-made brilliance (books, TV, films) every single day. And I already look forward to cooking dinner for friends and family next weekend, and afterwards we might play a game of Clank! In such moments of connection and bonding, LLMs are not an option.

But then I log onto LinkedIn, and within three comments I'm reminded that reason is not exactly humanity's core competency. As Steven Pinker keeps patiently pointing out, rationality is an endangered species—occasionally sighted in academia, rarely found in the wild.

What I don't need is another outbreak of irrational technophobia masquerading as wisdom. Yes, the AIs don't "reason" in the philosophical sense—but let's be honest, neither do most humans. And by any measurable standard, the machines are already outperforming us in logic, humility, and basic coherence.

So next time I say I asked an LLM for advice, spare me the sermon about "token prediction." Because frankly, the soulless pattern-matching of the AI produces better outcomes than the average Karen who shows up at my door with a clipboard and a conspiracy theory.

When solving problems, I'll take the cold, consistent rationality of an algorithm over the warm, chaotic nonsense of the average human any day. The real bias isn't in the machines—it's in our desperate refusal to admit they might already be producing more coherent output than most of us are.

"If AIs are stochastic parrots, then humans are sophistic monkeys—noisy storytellers peddling nonsense as wisdom."

If AIs are stochastic parrots, then humans are sophistic monkeys—noisy storytellers selling nonsense as wisdom. I happily cook dinner, play games, and have fun with the wonderful monkeys in my own social circle. But when a real design challenge, professional problem, or academic puzzle arises, and with no true expert in sight, I'll take my chances with the stochastic parrots.

Jurgen


October 14, 2025

The Ten Commandments of Viability in AI

Your Survival Guide for When the AI Bubble Bursts

For creatives, builders, founders, and leaders steering their ships through the coming AI implosion.

I've been here before. In 1999, my first startup caught a brief ride on the dot-com rocket ship. For a few intoxicating months, I was headed for the stratosphere. Then the bubble burst in 2000, my rocket became a fireball, and I found myself free-falling with a parachute that wouldn't deploy. The landing wasn't graceful.

The current AI boom seems to follow a similar playbook. Even the perpetually optimistic billionaires are quietly hedging their bets. What felt like an unstoppable revolution is setting up for the classic sequel: mass layoffs, shareholder lawsuits, and company obituaries written as thoughtful Substack essays.

When the correction hits—and many think it will—organizations that mortgaged their future to LLMs and AI agents will scramble for oxygen. Their leaders will reach for the usual panic buttons: slash costs, pivot frantically, or pretend everything's fine while the ship takes on water. But a select few—those who built with both foresight and integrity—will emerge stronger. Not by exploiting the wreckage, but by navigating through it with their principles intact.

This isn't a playbook for predators. It's an operating manual for viability in AI for those who understand that real sustainability comes from building something worth preserving. These ten principles aren't predictions. They're commitments to create value that outlasts the hype cycle.

Here's the parachute that gives you a safe landing:

1. Build Optionality, Not Plans

Plans assume the future is predictable. Options assume it's chaotic. Stop pretending you can forecast what's coming next. Instead, design adaptable teams, modular systems, and multiple paths forward. Real options theory isn't an academic exercise—it's survival strategy.

The companies that thrive post-correction won't be the ones with the most detailed roadmaps. They'll be the ones who can pivot without losing their souls, adapt without abandoning their values, and change direction without changing their core purpose.

Agility isn't just about moving fast. It's about hoarding possibilities when everything around you is falling apart.

2. Master Signal Detection

The next shift won't arrive with fanfare in TechCrunch headlines. It'll seep through capital allocation patterns, hiring freezes, and whispered conversations about down-rounds. The smart money is already repositioning, but they're not broadcasting their moves.

Learn to read between the lines. Follow funding flows, track talent movements, and pay attention to which companies are quietly scaling back their AI investments. Read all you can about AI and talk with many people. But don't hoard your insights—share them with your network. Collective intelligence beats isolated genius every time.

Build communities of practice where signal detection becomes a shared capability, not a competitive advantage. When everyone sees around corners, more people survive the crash.

3. Strengthen Your Permanent Assets

Your reputation, relationships, and proprietary data aren't just valuable—they're irreplaceable. Guard them accordingly, especially the trust others place in you. In downturns, desperation drives bad decisions. The temptation to monetize private data intensifies when revenue goals start looking impossible.

Resist that temptation. Ethical data stewardship isn't charity—it's the foundation of long-term value creation and public legitimacy. Companies that violate user trust during tough times don't just lose customers; they lose access to the talent and partnerships they'll need for recovery.

Your permanent assets compound over time, but only if you don't liquidate them for short-term survival.

4. Build Cultures of Judgment

Everyone's learning to write better prompts. But few are learning to think more clearly. Technical fluency is table stakes now; judgment is the differentiator.

Real judgment isn't a top-down mandate. It's a cultural capability. Reward critical thinking over quick execution, and teach your teams to recognize when not to automate. Create spaces where people can question assumptions, experiment safely, and improve each other's reasoning.

The future belongs to organizations that can think better together, not just execute more efficiently alone. Competence shared becomes competence multiplied.

5. Stay Grounded

Hype is intoxicating; honesty is sobering. One builds unsustainable expectations; the other builds durable organizations. Keep your team informed, not high on their own supply.

Transparency builds trust even when the news isn't good. Actually, especially when the news isn't good. Calm realism will outlast performative optimism every single time. The strongest leaders tell their teams the truth before the market forces them to hear it.

This doesn't mean becoming a pessimist. It means becoming a realist who can distinguish between genuine progress and elaborate storytelling.


6. Monetize Reality, Not Illusion

When the music stops, unit economics becomes the only song that matters. Focus ruthlessly on paying customers, sustainable business models, and reliable value delivery. Test every assumption about "scale" and "engagement" and "network effects."

And when you inevitably screw something up—because you will—own it publicly. Accountability isn't damage control; it's how trust compounds under pressure. Companies that admit mistakes and fix them quickly earn more credibility than companies that never seem to make mistakes at all.

Reality has a way of asserting itself. Better to work with it than against it.

7. Treat Vendors as Utilities—and Secure What Matters

Foundation-model providers aren't your strategic partners—they're critical dependencies. Diversify across APIs and ecosystems, and harden your security while you still have resources to do it properly.

Budget cuts invite security breaches. Cost-cutting often targets "invisible" expenses like cybersecurity until those expenses become very visible in the form of data breaches, regulatory fines, and reputation damage.

Protect your data, your systems, and your people. Paranoia isn't cynicism—it's operational maturity. The companies that survive disruption are the ones that don't disrupt themselves through preventable failures.

8. Design for a Long Winter

Frugality is intelligent; cruelty is stupid. Cut waste aggressively, but protect human dignity religiously. Maintain psychological safety and fairness even when budgets are constrained. The talent you retain through tough times becomes your competitive advantage when conditions improve.

While you're optimizing for financial efficiency, optimize for environmental efficiency too. Choose sustainable compute options, reduce energy waste, and build for regeneration rather than extraction. Surviving the next decade requires respecting the decades that follow.

Winter preparedness isn't just about cash reserves—it's about building systems that can function under stress without breaking the things that matter most.

9. Invest in What AI Can't Replicate

When everything becomes automated, meaning becomes the scarcest resource. Double down on connection, curation, synthesis, and empathy—the irreducibly human capabilities that create lasting value.

But don't make this work feel like a burden. Keep collaboration playful and creative. The best work happens when people genuinely want to show up and contribute. Joy isn't a nice-to-have perk. It's a renewable energy source for sustainable performance.

The future belongs to organizations that can blend human creativity with machine capability without losing what makes the work worth doing.

10. Steward the Correction, Don't Exploit It

Market corrections hit different groups differently. Underrepresented workers typically suffer first and longest. When opportunities arise—acquiring talent, absorbing assets, expanding market share—pursue them equitably and transparently.

Diversity isn't a luxury during tough times; it's resilience insurance. Use whatever stability you maintain to rebuild the ecosystem stronger, not to pillage the wreckage for short-term gain.

The correction will test more than your business model. It will reveal whether you're a builder or an opportunist, whether you create value or just extract it.


Your Biggest Test

The coming AI correction won't just measure technical competence. It will expose character. It will separate creators and builders from idolaters and speculators, and sustainable growth from unsustainable extraction.

Your job isn't to predict the next hype cycle. It's to build organizations that can bend without breaking, teams that can adapt without losing hope, and systems that honor both human potential and planetary boundaries. Because progress without principles isn't progress. It's just expensive chaos with better marketing.

The measure of your success won't be how well you rode the bubble. It will be how thoughtfully you stewarded your resources, relationships, and responsibilities when the easy money disappeared and the hard work began.

Good luck.

Jurgen


October 9, 2025

Scrum Is Done. Finished. History.

We're going to need dual-track business agility.

For twenty years, Scrum was the Shinkansen (新幹線) of work management. The fastest kid on the block. Now, it's the family minivan blocking the left lane while AI agents are flashing their headlights. Once, Scrum was the fast lane—the bullet train of modern work, the Agile Manifesto made flesh. But times change, work accelerates, and the machines aren't waiting for standups and retros.

We're going to need dual-track business agility.

On High-Speed Trains

Almost every week, I ride the rails between Rotterdam and Brussels because my hubby insists on working in the epicenter of European bureaucratic madness—a.k.a. Brussels. So yes, I've become something of a connoisseur of the Dutch–Belgian train experience.

What never fails to baffle me: the so-called "international" train between Amsterdam and Brussels stops at exactly two Dutch stations—Rotterdam and Breda—before deciding that the entire countryside of Flanders deserves a personal visit. Noorderkempen, Antwerpen-Centraal, Antwerpen-Berchem, Mechelen, and what feels like every lamppost within a ten-kilometer radius of Brussels get their moment of glory and a dedicated stop. On the Dutch side, the train gallops like a racehorse. Cross the border to Belgium, and suddenly we're on a sightseeing tour led by a donkey.

The reason for the epic slowdown is that Belgium's rail infrastructure is decades behind. Most NMBS trains cannot handle high-speed travel or the signaling systems on Belgium's shiny high-speed line (the one that Eurostar actually uses). So, regular "international" trains are forced to plod along the old tracks, reminiscing about the good old days when steam engines were still a thing.

Sure, the newer Eurocity Direct service skips most of Belgium's pit stops and moves at something resembling modern speed. But for the rest, the Belgian side remains a bottleneck. Acceleration of the entire experience is held hostage by legacy systems.

On Self-Driving Cars

In the case of automotive modernization, the situation isn't much different. The great unsolved puzzle of self-driving cars isn't the car. It's the road. We expect the machines to behave flawlessly in an environment designed for caffeine-addled primates with varying levels of attention span and regard for traffic law. Trains have it easy: they follow their own private rails, ruled by standardized tracks and signals. Cars, on the other hand, must navigate a chaotic socio-technical mess that barely qualifies as a transportation system.

Public roads are an anarchic wonderland of half-erased lane markings, contradictory signage, and intersections designed by surrealists. Humans get through it all using habit, instinct, peer pressure, and perhaps a touch of collective telepathy. Machines, meanwhile, have to translate all that daily madness into math—parsing every pothole, pigeon, and pedestrian into navigable data. No wonder they freeze up at what engineers politely call "edge cases," otherwise known as "daily problems."

The obvious solution—giving robot vehicles their own spotless, rule-abiding infrastructure—would make things much simpler. Imagine highways that actually talk to the cars, with no tuk-tuks, beer bikes, or BMW owners randomly darting left and right. But that would require designing a complete parallel system and decades of urban surgery. That's not happening anytime soon.

So, the dream of full acceleration inches forward at human speed, with humans as the major bottleneck, trapped on roads built for our own chaos.

On Dedicated Lanes

Progress always seems to happen when someone finally decides to stop sharing the road. The Japanese figured this out decades ago: their legendary bullet trains glide on exclusive tracks where nothing slower can trespass. The result is precision, speed, and a nation that sniggers at European timetables. The same principle shows up everywhere—from bus-only or taxi-only lanes (where commuters and the elite whisk past the traffic jam apocalypse) to airport priority queues (where business travelers and loyalty card holders laugh at the endless line of holiday tourists).

Even in agile software development, the idea pops up in the form of dual-track development, where teams keep discovery work separate from delivery tasks: one lane for exploration, one for execution. And it appears on Kanban boards with different swim lanes for different classes of service. Everyone gets where they're going, but different items move at different speeds. The whole system accelerates with fewer head-on collisions.

That's how you fix bottlenecks in any complex system: you stop pretending everyone moves at the same speed. You give the fast lanes to those who can handle them, and the slow lanes to those who can't. Acceleration isn't about equality—it's about design. The world runs smoother when we stop forcing trains, buses, taxis, horses, donkeys, and ideas to all share the same track.

Once, Scrum Was the Fast Lane

Twenty-five years ago, Scrum was a revolution disguised as a framework. At a time when organizations moved like lame donkeys, Scrum taught teams to sprint—to think small, ship fast, and learn faster. It gave developers permission to talk to customers, managers a reason to get out of the way, and organizations a method to adapt without falling apart. Sprints, backlogs, stand-ups, and retros weren't just rituals; they were acts of rebellion against the lumbering giants of waterfall thinking.

For a glorious few decades, Scrum was the fast lane—the place where motivated teams could outrun corporate inertia and rediscover a sense of flow, on their own separate track. It turned work into something living, iterative, and human. Scrum didn't just change how software was built; it changed how people thought about change itself. Scrum was the high-speed train with a dedicated lane that everyone aspired to board. It proved that you can go fast if only the entire organization can adapt to the new infrastructure.

Now, Scrum Is the Slow Lane

Scrum was built for humans—for task boards, sticky notes, and the beautiful chaos of group discussion. But the future won't wait for our bi-weekly cycles. Hybrid teams of humans and AIs are emerging, and there's no reason for them to play by Scrum's rules. Algorithms don't need stand-ups, they don't forget tasks, and they certainly don't waste time arguing about backlog grooming versus backlog refinement. Their world runs on digital time, not biological rhythms.

Trying to make AI agents follow Scrum is like forcing self-driving cars to yield at every crosswalk. In traffic, that makes sense. In the enterprise, it's absurd. The rituals that once liberated humans now shackle their digital colleagues. As artificial minds join our teams, they'll demand their own lanes, their own cadence, and their own logic of collaboration. The challenge ahead isn't to make AIs agile—it's redesigning business agility itself, so humans don't become the bottleneck in their own product development process.

Acceleration is the new Agile. But only when we take Scrum out of the loop.


A Separate Lane for Humans

Scrum is aging infrastructure—a beautiful relic of the human era. It was invented for teams made of coffee, cortisol, and collective confusion, not for digital minds that run on electrons and precision. Every core practice in Scrum—timeboxes, task boards, standups—was crafted to manage the messy psychology and sociology of human collaboration. It was never meant for beings who don't get tired, bored, or passive-aggressively silent in retrospectives.

Take sprints, for example. They make perfect sense for humans because we need external boundaries to protect our mental bandwidth. Sprints offer relief from chaos: a rhythm, a finish line, a fleeting sense of progress. Those little cycles of focus and closure reduce overwhelm, boost motivation, and manufacture the illusion of momentum. Sprints keep teams socially synchronized. Shared reflection creates rhythm, trust, and accountability. Scrum's cadence is emotional engineering disguised as process efficiency.

Then there's visibility—the sacred Scrum board. It's not just about tracking tasks; it's about regulating anxiety. Humans need to see work moving, because visible progress produces dopamine, and dopamine keeps everyone from quitting their jobs to raise alpacas. Shared boards soothe the uncertainty of complex work. They make the invisible visible, fostering belonging, collaboration, and mutual responsibility. It's not technology—it's therapy with post-its.

And finally, the rituals: standups, retrospectives, and other ceremonies that serve as daily recalibrations of morale and sanity. Humans need rhythm and repetition to stay aligned. We crave communication, validation, and a sense of shared struggle. These rituals are social lubrication—keeping egos balanced, conflicts exposed, and emotions processed just enough to keep the engine running.

But AIs don't need this psychological scaffolding. They don't lose focus, question purpose, or spiral into existential dread after a failed sprint. For them, Scrum's social engineering is dead weight. Machines don't need morale—they need throughput. Which is why the future of work requires two lanes: one optimized for emotional mammals, and one for emotionless speed demons.

A Separate Lane for Machines

Machines don't need motivation—they need architecture. Their version of "teamwork" isn't about feelings, feedback, or fairness; it's about alignment, context, and precision. Where humans thrive on collaboration and meaning, robots thrive on clarity and constraints. To make them work well together, we don't need managers or coaches—we need orchestrators, engineers, and protocols that define how intelligence flows between systems.

AI agents operate in ecosystems, not teams. They don't hold standups; they exchange data. Their version of communication isn't conversation—it's coordination. The human equivalent of empathy for them is context engineering: giving each agent the exact information, objectives, and parameters it needs to act without hesitation. The quality of their work depends on how well we define context, not on how we "invite collaboration" through the brevity of our user stories.

Instead of timeboxes, machines need surveillance. Continuous monitoring, evaluation metrics, and automated corrections. It's not about morale; it's about maintaining integrity. AIs don't celebrate sprint reviews; they update models, retrain weights, and adjust probabilities. We measure their learning in tokens per second, not in conversations per week.

And where humans need autonomy to feel trusted, AIs need governance to stay useful. Command-and-control isn't tyranny in their world—it's how you prevent chaos. The faster teams go, the more important safety and security become. Workflow orchestration replaces management, connecting agents, APIs, and models into coherent value chains wrapped in responsible AI. The challenge isn't to motivate them; it's to keep them from accelerating themselves into a corporate train wreck.

This is the infrastructure of the robotic lane: context, orchestration, and control. No sticky notes. No therapy sessions disguised as meetings. Just systems talking to systems, relentlessly optimizing for output. The only emotion involved is human amazement—if we manage to keep up.
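To make that a little more tangible, here is a minimal sketch of the robotic lane in Python. Every name in it (Context, Agent, the guardrail check) is a hypothetical placeholder for illustration, not a real orchestration framework:

```python
# Illustrative sketch of the robotic lane: context, orchestration, and control.
# All names are hypothetical placeholders, not a real orchestration framework.

from dataclasses import dataclass, field

@dataclass
class Context:
    objective: str                                     # what must be achieved
    inputs: dict = field(default_factory=dict)         # exact data the agents need
    constraints: list = field(default_factory=list)    # guardrails to respect

@dataclass
class Agent:
    name: str

    def run(self, ctx: Context) -> dict:
        # A real agent would call a model or a tool here; this stub just echoes.
        return {"agent": self.name, "objective": ctx.objective, "ok": True}

def within_guardrails(result: dict, ctx: Context) -> bool:
    """Automated control instead of retrospectives: every output is checked."""
    return result.get("ok", False)   # real checks: schema, policy, cost, safety

def orchestrate(pipeline: list[Agent], ctx: Context) -> list[dict]:
    """No standups, no sprints: agents exchange context, and the orchestrator
    monitors continuously, halting instead of motivating."""
    results = []
    for agent in pipeline:
        result = agent.run(ctx)
        if not within_guardrails(result, ctx):
            raise RuntimeError(f"{agent.name} breached a guardrail; halting the flow")
        ctx.inputs[agent.name] = result   # hand off output as context for the next agent
        results.append(result)
    return results
```

Note the design choice: a failing agent gets halted, not coached. In the machine lane, control replaces motivation.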

Dual-Track Agility

The future of agility isn't a single highway—it's a split system. Like the high-speed trains of Japan or the security lines at Schiphol Airport, the next generation of work needs separate but connected lanes—one for humans, one for machines. The old dream of "one framework to rule them all" is quaint. We're past that. Humans and AIs move at different speeds, think in different patterns, and fail in completely different ways. Forcing them to share the same lane is how you get gridlock, not progress.

Dual-Track Agility is about building parallel infrastructures: a human lane optimized for collaboration, emotion, and meaning, and a machine lane optimized for precision, scale, and speed. But the real magic lies in the interchange—where work hands off between slow and fast, between intuition and computation.

Future agility won't just manage sprints and backlogs; it will choreograph flows across species of intelligence. It will define when the humans explore, when the robots execute, and how both adapt in sync. The new Agile isn't about treating everyone equally—it's about designing work ecosystems where every kind of intelligence operates at its natural pace, connected through protocols, not post-its.

The future of work is parallel, not sequential.
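To give the interchange one concrete image: it behaves like a routing rule. The sketch below is a toy, with invented thresholds and queue names, but it shows the split: routine, well-specified work flows into the machine lane, while novelty, ambiguity, and ethical weight escalate to the human lane.

```python
# Toy routing rule for dual-track agility; thresholds and names are invented.

from queue import Queue

machine_lane: Queue = Queue()   # high-speed automated flows
human_lane: Queue = Queue()     # judgment, ethics, ambiguity

def route(work_item: dict) -> None:
    """Escalate ambiguous, risky, or novel work to humans; automate the rest."""
    needs_judgment = (
        work_item.get("ambiguity", 0.0) > 0.5
        or work_item.get("ethical_risk", 0.0) > 0.2
        or work_item.get("novel", False)
    )
    (human_lane if needs_judgment else machine_lane).put(work_item)

route({"task": "summarize yesterday's support tickets", "ambiguity": 0.1})  # machine lane
route({"task": "decide the layoff policy", "ethical_risk": 0.9})            # human lane
```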

The Shift of the Dominant Lane

Right now, most organizations still live entirely in the slow lane. Scrum boards covered in Post-its, sprint reviews with too many humans and not enough data—an ecosystem built for biological bandwidth. AIs are there, sure, but mostly as interns: they write the meeting notes, crunch a few numbers, summarize yesterday's chaos. The humans still steer.

But that balance won't last. The gravitational pull of acceleration is relentless. As automation scales and agents grow smarter, more and more work will migrate to the fast lane. Tasks that once required coordination meetings, handoffs, and polite Slack messages will become fully automated flows. Companies won't just do things faster—they'll do more things with the same number of people. The ratio shifts: fewer hands on keyboards, more minds orchestrating AI agents.

In this new reality, the fast lane becomes dominant. Humans won't vanish; they'll evolve into conductors—designing workflows, defining ethics, and setting goals for fleets of autonomous agents. Agile will adapt accordingly, shedding its people-centric rituals in favor of orchestration protocols, context management, and dynamic governance.

The Agile of the future won't ask, "How can humans work better together?" but "How can humans and machines think, act, and adapt together at scale?" The answer will define the next industrial revolution.

Do you like this post? Please consider supporting me by becoming a paid subscriber. It's just one coffee per month. That will keep me going while you can keep reading! PLUS, you get my latest book Human Robot Agent FOR FREE! Subscribe now.

Scrum is Done

Scrum is done the way the Elizabeth Line and the Gotthard Base Tunnel are done—magnificent feats of engineering, celebrated milestones in human coordination. They're symbols of what disciplined collaboration can achieve, and they'll serve their purpose beautifully for decades. But even monuments age. Maintenance crews will keep repainting, reinforcing, and retrofitting, just as the Scrum Guide gets its periodic facelift to look "modern" again. Yet the foundations remain what they are: solid, admirable, and increasingly outdated. Scrum's legacy will endure—not because it's still fast, but because it once was.

Scrum is Finished

Scrum is finished the way good books are finished. I still sell Management 3.0 and Managing for Happiness, and they're holding up quite nicely, thank you very much. The principles are sound; the metaphors still land. But the world they were written for is fading in the rearview mirror. Scrum, too, captured the zeitgeist of its era—when human teams needed structure to move faster. It was a product of its time, like all revolutions. Now, the revolution has moved on. The next edition won't be a rewrite—it will be a replacement.

Scrum is History

Scrum is history the way Star Wars and The Terminator are history—aging classics that defined genres. People still quote them, rewatch them, and study them to understand how greatness was once achieved. They remain inspiring precisely because they remind us how far we've come. Scrum deserves that same reverence. It's not being buried; it's being honored. The framework taught millions how to collaborate, inspect, adapt, and deliver in a world that desperately needed those skills. It earned its place in the canon.

Scrum is Not Dead

But let's be clear—Scrum isn't dead. Dead things stop mattering. Scrum still matters. Teams will continue to use it for years, perhaps decades, just as people still read old books and quote old films. Just like some people enjoy a ride on one of Belgium's ancient local trains. But the center of gravity for business agility is shifting. Scrum is no longer the default speed for progress—it's the scenic route, the human lane, the infrastructure that once carried us into the future but now connects us to our past.

Scrum will remain valuable, just not dominant. The world is accelerating toward the fast lane—where humans orchestrate and machines execute. In that new landscape, Scrum will stand proudly in the museum of modern work: respected, loved, and outpaced by the future it helped to create.

Jurgen


October 6, 2025

How to Cope with Uncertainty About the Future

Stop worrying about strategy. Start developing your options. Don't make a plan; make a platform.

I'm feeling more anxious and uncertain than ever about my future. And yet, I stopped making strategies. No more pitch decks. No more business plans. No more daydreaming about unrealized fortunes. I now have a different way of coping with uncertainty about the future. I work hard on my platform of options.

My accountant inquired about my business plans and revenue forecasts for the next three years. They "needed" this to create the annual reports.

"Are you kidding?" I replied. "I don't even know what I'll be doing next month!"

Welcome to The Maverick Mapmaker — where M-shaped, multidisciplinary professionals learn how to orchestrate work between humans and AI. If you refuse to be put in one box… if your mix of skills is your edge… if you want tactics for thriving in the age of AI without falling for hype or doom — this is your place. Stay sharp, stay multidisciplinary, and stay ahead. Subscribe to my Substack now.

Strategic Planning Is Pointless

Strategic planning is the business world's favorite fortune-telling exercise. But relentless strategizers and forecasters are often as confident as they are mistaken. Strategy is a dangerous trap for any freelancer, entrepreneur, or multidisciplinary professional. I speak from personal experience here. I could build a replica of the Kremlin out of failed business model canvases.

While the fortunetellers among us are busy crafting color-coded strategic maps for scenarios that will never materialize, their smarter peers are sharpening their current networks, growing their reputations, and expanding their platforms. Every hour professionals spend agonizing over next year's strategy is an hour not spent building today's advantages: better data, stronger relationships, deeper expertise, and sharper instincts.

Strategic planning is mostly theater.

I'm not worshipping strategic chaos or nihilism here. I just recognize that in our current volatile business landscape, the obsession with prediction and forecasts reveals a fundamental mismatch with how the future usually plays out. Markets don't follow the roadmaps in our PowerPoints. Opportunities never arrive on schedule. And black swans never text us their ETAs.

"Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what's going to happen next." — Gilda Radner, comedian

Coping with uncertainty about the future is not about predicting what will happen next. Real strategy is building and positioning your platform to capitalize on whatever happens tomorrow. We must stop obsessing over next year's plans and start developing today's options.

How I Cope with Uncertainty

I learned this lesson the brutal way.

For years, I'd been the guy with the grand visions, ambitious business plans, and meticulously crafted maps and canvases. I looked like someone who had it all figured out. In reality, I was just skilled at spinning elaborate theories about a business landscape that kept shifting beneath my feet as I frantically tried to chart my course. I was a maverick mapmaker charting a sea of landslides.

Not anymore. Forecasting my business is like predicting the final ranking of next year's Eurovision Song Contest without even knowing which countries are going to participate. I abandoned that approach.

I stopped pretending I could predict the future and instead started building the capacity to cope with uncertainty about the future. Instead of betting on carefully crafted pitch decks and value propositions, I started cultivating a portfolio of possibilities and growing a platform of options.

For example, I now spend considerable time developing and refining what I call my Human Attention System—collecting, organizing, and nurturing all my contacts across Substack, LinkedIn, Gmail, Google Calendar, X/Twitter, and half a dozen other platforms. It might sound mundane, even boring. But in terms of creating options, it's like spinning gold:

No AI workflows or agents will function without good data. It's garbage in, garbage out. Instead of worrying about specific value streams I might develop someday, it's smarter to grow a system with healthy data that will turbo-charge any value stream I discover and develop in the future.

Data, network, reputation, and infrastructure create moats around your business in ways that algorithms can't. Processes can be easily copied, reverse-engineered, or spat out by the latest version of Claude. But nobody can replicate the four moats and the thousands of relationships I am nurturing. Nobody can duplicate my platform.

While automation creates abundance, what becomes scarce and precious is human attention. No matter which direction your business evolves, what's increasingly valuable is having the attention of the people you know. They cannot engage with you and doom-scroll through TikTok at the same time.

The possession of excellent data can actually create business opportunities you weren't even seeking. Intelligence begets innovation in ways we cannot even imagine today. If you're not actively listening for signals in the noise, you'll never have that business idea that could lift you from the swamp next year.

That's why I stopped worrying about the future and why I'm not developing any business strategies for a while. It feels like squandered time. Instead, I spend hours improving my data, growing my network, nurturing attention, and listening for signals. This is how I increase optionality.
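To show how unglamorous this is, here is a minimal sketch of the first chore in any Human Attention System: merging contact exports from different platforms into one deduplicated pool. The folder, file, and field names are hypothetical.

```python
# A hypothetical first chore for a Human Attention System: merge CSV exports
# from several platforms into one pool, deduplicated by email address.
import csv
from pathlib import Path

def load_contacts(folder: str) -> dict[str, dict]:
    """Merge all CSV exports in a folder, keyed by lowercased email."""
    merged: dict[str, dict] = {}
    for path in Path(folder).glob("*.csv"):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                email = (row.get("email") or "").strip().lower()
                if not email:
                    continue  # garbage in, garbage out: skip unusable rows
                record = merged.setdefault(email, {"sources": set()})
                record["sources"].add(path.stem)  # remember where it came from
                record.update({k: v for k, v in row.items()
                               if v and k != "sources"})
    return merged

contacts = load_contacts("exports")  # e.g., linkedin.csv, gmail.csv, ...
print(f"{len(contacts)} unique contacts across all platforms")
```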

The Science of Optionality

Complexity researchers and systems thinkers have understood optionality for decades. Increasing optionality is a strategic approach that helps both individuals and organizations cope with uncertainty by expanding the range of choices and pathways in changing or unpredictable environments.

From a systems thinking perspective, organizations are interconnected parts within larger systems. In uncertain environments, rigid plans often collapse. Having multiple options means building redundancy—if one path fails, others remain available. It creates resilience, allowing the system to adapt to disruptions because alternatives exist. And it establishes positive feedback loops where experimentation and learning from many options drive antifragility.

Complexity science studies (among other things) how small changes can have massive impacts in unpredictable, nonlinear systems. Optionality works by exploring the solution space—more options mean more chances to discover beneficial actions when dynamics shift. It helps avoid path dependency, preventing you from getting locked into a single approach when you need to pivot. And it enables emergence, where new solutions may appear that could not have been predicted in advance.

The financial math is unforgiving as well: every hour spent perfecting plans for scenarios that won't materialize is an hour not spent developing real options for undiscovered scenarios that will. It's opportunity cost in its purest form—trading actual adaptability for the comforting illusion of certainty.
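You can watch optionality pay off in a toy simulation. Assume, just for illustration, that each option's payoff in an unknown future is an independent random draw; the expected value of your best option then grows with the number of options you hold. The numbers are illustrative, not a model of any real market.

```python
# A toy model of optionality: each option's payoff in an unknown future is an
# independent random draw, and you only exercise your best option.
import random

random.seed(42)  # reproducible illustration

def expected_best_payoff(num_options: int, trials: int = 10_000) -> float:
    """Average payoff when you can pick the best of your available options."""
    total = 0.0
    for _ in range(trials):
        payoffs = [random.gauss(0, 1) for _ in range(num_options)]
        total += max(payoffs)
    return total / trials

for k in (1, 2, 5, 10):
    print(f"{k:>2} options -> expected payoff {expected_best_payoff(k):+.2f}")
# The best of ten draws beats the best of one, every time you average it out:
# more options mean more chances to catch the scenario that actually happens.
```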

That's enough systems jargon for now, I think.

To put it more simply: rather than agonizing over future success, spend time growing your platform. Whatever value you choose to create next year, the platform you build now will enable and accelerate it.

Why This Matters Now

Planning is the last refuge of control freaks in an otherwise uncontrollable world.

While everyone's busy crafting strategies and roadmaps, reality keeps rewriting the rules faster than we can read them. Entrepreneurs clutching their strategic plans look suspiciously like generals armed with assumptions about a battlefield that is unlike anything they've ever experienced. They charge into enemy territory with a plan promising swift victory, but four years later they're still frantically defending the first twenty kilometers they captured. (Sound familiar?)

When disruption hits—and it always does, doesn't it?—the winners aren't the ones who saw it coming or planned for it. They're the ones who had the people, relationships, and resilience to surf the uncertainty and exploit the opportunity, while their competitors were updating their forecasts somewhere underwater.

I'm not advocating abandoning all structure. I simply recognize that in a world where the map changes faster than we can redraw it, skill with a compass matters more than cartography. Our goal isn't about giving up. It's about giving in. It's about developing an antifragile platform that gets stronger every time our assumptions shatter.

The future belongs to those who build capability, not those who build scenarios. So let's stop trying to map the unknowable future. (Yes, I know. This sounds rather ironic coming from someone who calls himself The Maverick Mapmaker.) Instead, we should make ourselves unmappable to everyone else—more connected, better informed, and less predictable.

When everything changes, we don't want to be the player with the best cards for a game that no longer exists. We want to be the ones who learned how to excel in any game that lands on the table.

Now, when someone asks me about my plans, I say, "My plan is to be in optimal shape for whatever gets thrown in my path."

The Practice of Uncertainty

True strategic thinking isn't about trying to forecast the answers. It's about staying fluid enough to ask better questions when the world inevitably zigs while everyone's plans are busy zagging. Coping with uncertainty about the future means creating a launchpad, not a crystal ball. Developing options, not certainties.

For me, this means returning each day to the unglamorous work of cleaning data, automating my network, building relationships, and developing capabilities. I'm not anxiously plotting scenarios. I'm steadily growing my Human Attention System and we'll see what unplanned fortunes it brings me. I'm tired of being the guy holding a beautifully crafted map leading absolutely nowhere.

I'm done with the pitches and PowerPoints.

"We can't be afraid of change. You may feel very secure in the pond that you are in, but if you never venture out of it, you will never know that there is such a thing as an ocean, a sea." — C. JoyBell C., author

Never before have we faced such uncertainty.

I see it every day in my news feeds and social media threads. I discuss it often with friends, family, and freelancers who find themselves in more challenging situations than I face. Everyone wants to know how to deal with an uncertain future. The answer is not yet another business model canvas with strategic scenarios. For most of us, our future is not an AI-generated pitch deck with AI-generated forecasts.

Our future is a human-developed platform with a dizzying range of options.

Stop planning. Start building.

Jurgen

Subscribe now to join M-shaped rebels orchestrating work with AI.
