Faisal Hoque's Blog

May 2, 2026

Be More Bored

We’ve engineered boredom out of existence. But boredom fuels creativity, self-reflection, and growth. It’s time to reclaim the void.
KEY POINTS
- AI and technology have made boredom nearly impossible – and that’s a serious problem.
- Research shows boredom activates the brain’s default mode network, sparking creativity and innovation.
- Without boredom, children never learn to generate meaning; adults never pause to examine their lives.
- Reclaiming boredom requires deliberate effort: Schedule nothing, walk without input, sit with silence.

I was in the check-out queue of my local supermarket. It was long and moving slowly. I’d forgotten my phone in the car, so I just stood there, waiting as the minutes dragged out. I stared at the cashier. I stared at the aisles. I stared at the ceiling. Hmm. What could I stare at now, I wondered?

I felt … weird. Not angry, not excited, not sad, not happy. What was this strange feeling?

Suddenly, I got it.

I was bored. It had taken me a moment to recognize the feeling because it had been so long since I had last felt it.

The machine is always there, isn’t it? The moment the mind begins to wander, we reach. Back in prehistory, there was a thing called a radio, and then came television. People filled their evenings with shows. Then came Google and YouTube, followed by the hellscape of social media. Always something to do, always something to keep us occupied.

AI takes this a step further. Social media can ignore you—your post gets no likes, the group chat moves on without you—but algorithms never will. They always have time for you. And it’s not just time, either. They always care about you, attend to you, shape themselves around you and remember what you care about. Algorithmic engagement never runs dry.

It is practically impossible to be bored nowadays because there is something of interest just inches or seconds away. Perhaps this is progress of a sort, but it is also destroying something essential.

Boredom Fuels Innovation

We treat boredom as if it were a disease: something to cure, escape, or optimize away. But research tells a very different story.

Psychologists Sandi Mann and Rebekah Cadman at the University of Central Lancashire in the UK found that boredom can be surprisingly good for creativity. In their studies, people who first endured a dull task—such as copying or reading phone numbers from a directory—generated more creative responses afterward than those who skipped the boredom. The point isn’t that boredom kills the mind. It may be what nudges the mind to wander, with that wandering then leading to good ideas.

Neuroscience helps explain why this happens. When nothing external fully claims our attention, the brain’s default mode network (DMN) comes online. This is the system involved in memory, imagination, self-reflection, future thinking, and mind-wandering. It’s not a magic creativity button—insight also needs other networks that test and refine ideas—but the DMN helps explain why good ideas so often arrive in the shower, on a walk, or during a dull task. When we’re bored, the mind has fewer external hooks to grab onto. So it turns inward, wandering through memories, possibilities, and associations, which is how innovation happens.

What We Lose When We Lose Nothing

It’s about more than innovation. Without boredom, children don’t develop the internal resources to generate their own meaning. A child who is never bored—whose every idle moment is filled by a screen or an app or a voice that responds—may never learn to tolerate the discomfort that precedes imagination. The imaginary worlds, the strange games, the bizarre stories children invent: These are born in the empty hours. Take away the emptiness and you take away the ability to create magic from nothing.

The cost is no less significant for adults. When every moment is filled with engagement, we never encounter ourselves in silence. We stay stimulated—and never pause long enough to ask whether any of it matters.

Without boredom, we lose one of the most important conditions for deep self-reflection. We lose that unstructured empty time when the questions that we have been avoiding bubble up and confront us: What do I actually want? What am I avoiding? Is this the life I intended to build?

Some of the most important steps in my own journey came from moments of nothing, stretches of boredom in which the absence of stimulation forced my mind to wander into territory I wouldn’t have chosen consciously. Boredom was the uninvited guest that brought the gift I didn’t know I needed.

Boredom won’t return on its own. Machines are too available, too responsive—simply too good at filling the gap. In our time, reclaiming boredom must be a deliberate act.

Here are five ways to start:

1. Schedule nothing. Block 20 minutes with no input: no phone, no book, no meditation app. Not mindfulness. Just emptiness. Let your mind do whatever it does.

2. Delay the prompt. When you want to ask the machine, wait. Give your own mind 10 minutes with the question first. You may be surprised by what surfaces.

3. Walk without input. No podcast, no music, no notifications. Let boredom accompany you. Notice what your mind reaches for when nothing is offered.

4. Do something dull on purpose. Wash the dishes without a podcast. Fold laundry in silence. These aren’t just chores; they’re invitations for the default mode network to do its work.

5. Protect your children’s boredom. When they say “I’m bored,” resist the urge to fix it. That boredom is not a problem. It’s the beginning of something they need to build themselves.

Tune Out. And Let the Silence Return

I paid for my groceries and walked back to my car. Immediately, I looked at my phone. Thirty-seven notifications while I had been gone and new pings coming in even as I looked. I went to my email and was about to start typing. Then I stopped and turned my phone off.

Technology offers us infinite engagement. But too much engagement is noise.

The truth is, we have a deep need for silence. The empty space is not a void to be filled. It’s fertile ground. It’s where creativity germinates, where self-knowledge forms, where the questions that matter most have room to surface.

Let me be clear. I am not saying: use the silence to be mindful or meditate. I am not suggesting a clever way of optimizing the emptiness.

I am saying: be bored.

Let the empty space be empty.

Let the silence be silent.

[Feature Photo Source: luismolinero/Adobe Stock]

Original article @ Psychology Today.

The post Be More Bored appeared first on Faisal Hoque.

Published on May 02, 2026 14:40

April 30, 2026

Here’s how to jump-start your company’s responsible AI governance in 90 days

This step-by-step road map will guide you through the essential foundations for managing AI ethics, accountability, and human impact.

This month, Anthropic announced that it had built an AI model so powerful that it couldn’t be released to the public. Claude Mythos had autonomously discovered thousands of critical security vulnerabilities across all major operating systems and web browsers. Anthropic chose to make the model available only to a consortium of technology companies, giving them an opportunity to patch vulnerabilities and strengthen defenses before models with similar capabilities inevitably fall into the hands of those who would exploit them.

This development shines a light on the potential future dangers that the rapid evolution of AI models brings with it. These kinds of powerful models will proliferate, and their spread will create an escalating need for governance policies rooted in the principles of responsible AI. The practice of responsible AI aims to ensure that as AI systems grow more powerful, they remain fair, explainable, and subject to human oversight—governed by ethical principles and accountable structures that protect the people those systems affect.

Responsible AI is not something businesses can set aside for the moment and hope to implement in the future. Every AI system deployed without an adequate governance framework creates reputational, legal, and operational risk right now. Those risks will only compound over time. And the dangers are not only technical. A recent survey of 750 CFOs projects roughly 500,000 AI-related job losses in 2026 alone. Responsible AI must account for the societal impact of these systems, not just the operational risks they pose to the organizations that deploy them.

THREE PILLARS OF RESPONSIBLE AI

Ethical foundations. An AI use policy—a list of what people can and cannot do with AI tools—feels concrete and actionable. But a use policy sits downstream from the values that it formalizes. Before developing specific policies, the first thing you will need is clarity about what your organization stands for: the principles that will both guide policies and shape immediate decisions when technological advances blow past current guidelines.

Accountability and oversight. Responsible AI fails when nobody owns it. You need clear answers to key governance questions: Who can approve an AI deployment? Who can halt one? And who is accountable to the board when something goes wrong? Organizational accountability is a vital starting point but it is not enough on its own. You’ll also need frontline safeguards that keep humans meaningfully in the decision-making loop, especially when it comes to matters of safety and enduring consequences.

Human impact. Every AI deployment affects real people—people whose work changes, who lose their jobs, whose options are shaped by algorithmic decisions, and whose opportunities expand or contract in accordance with the scope of the new models. A responsible AI approach means being thoughtful and deliberate about the human effects of deployment, and actively designing for fairness, dignity, and human augmentation rather than replacement.

The 90-day plan that follows is built on these three pillars.

DAYS 1-30: MAP

The temptation with any governance initiative is to start building immediately. Resist that impulse. The first 30 days of this plan focus on mapping your AI landscape. In most organizations, the AI footprint is significantly larger, more fragmented, and less governed than leadership believes.

1. Map your AI landscape. Inventory every AI system used by the organization or that touches the organization in a significant way, including through “shadow use” of unsanctioned AI systems by employees. In most cases, the number will be significantly higher than leadership initially expects. For each use case, document what the AI does, what data it uses, who it affects, and who is responsible for its governance.

2. Force the worst-case conversations. For every AI use case you identify, ask your leadership team: What’s the worst-case scenario here? This approach is based on the catastrophize step of the CARE framework for AI risk management; the worst-case scenario is deliberately named to provoke the right mindset. The disciplined practice of imagining catastrophic failure aims to surface risks that would otherwise go unnoticed.

3. Triage. In some cases, the risks you uncover won’t be able to wait for you to develop a polished governance infrastructure. If the mapping and catastrophizing processes reveal that an AI system is making consequential decisions with no oversight, no explainability, and no clear owner—escalate the problem immediately. Pause the use of the system or place it under close human review. You don’t need a complete governance framework to act on an obvious risk.

4. Diagnose your culture. None of the governance structures you are about to build will work if your organizational culture isn’t actively engaged with them. You need to answer one fundamental question: Does your organization treat responsible AI as a business priority or as a compliance box to be checked? If the answer is the latter, a comprehensive culture change initiative will be required.   

5. Map your decision rights. You need clear answers to four questions:

a. Who can approve a new AI deployment?

b. Who decides when a system requires governance review?

c. Who can halt a deployment?

d. Who can reallocate resources to address a newly identified risk?

If the answers are ambiguous, your governance framework will have no teeth—decisions will default to whoever speaks the loudest or moves fastest. In this situation, responsible AI will lose every time.
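The four questions above can be captured as an explicit decision-rights matrix. The roles below are placeholders invented for illustration — the point is only that every decision right resolves to exactly one named owner, and that ambiguity fails loudly rather than defaulting to the loudest voice:

```python
# A minimal decision-rights matrix. Roles are hypothetical placeholders.
DECISION_RIGHTS = {
    "approve_deployment": "Chief AI Officer",
    "require_governance_review": "AI Governance Lead",
    "halt_deployment": "Chief AI Officer",
    "reallocate_risk_budget": "CFO",
}

def who_decides(right: str) -> str:
    """Return the single owner of a decision right, or fail loudly."""
    try:
        return DECISION_RIGHTS[right]
    except KeyError:
        raise ValueError(f"No owner defined for decision right: {right!r}")

print(who_decides("halt_deployment"))  # Chief AI Officer
```

Publishing the matrix matters more than the tooling: once it exists, any decision with no entry is visibly a gap rather than an invitation to improvise.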

DAYS 31-60: BUILD

In the second phase, the plan’s focus shifts to building the governance infrastructure that will sustain responsible AI over the long term.

1. Develop your ethical framework. Your ethical framework is the set of foundational principles that will guide every AI decision your organization makes, including the ones the policy hasn’t anticipated yet. It should address your commitments around fairness and nondiscrimination, your position on human oversight and the circumstances under which autonomous AI decision-making is and is not acceptable, your approach to employee impact and workforce augmentation, and your stance on the broader societal effects of AI.

2. Begin building the technical architecture. Governance policies without technical infrastructure are just words. Start putting in place the monitoring and data collection processes that your ethical framework needs to become an operational reality: the ability to track what your AI systems are doing, to detect drift and bias, and to produce the evidence your governance reviews will rely on. This work will not be complete by day 60, but the foundations need to be laid.

3. Establish ownership and structure. If responsible AI is a side responsibility bolted onto someone’s existing role, it will always lose out to the parts of their job that their performance is actually measured on. Someone needs to own responsible AI and governance as an intrinsic part of their actual job. Your organization needs a dedicated person or team with both an enterprise-wide view and the authority to enforce the relevant policies. You’ll also need people in each business unit with the responsibility and authority necessary to turn principles into practical governance on the ground.

4. Design your assessment process. Build a structured, repeatable process for evaluating AI systems against your ethical framework. The assessment should produce a clear risk profile for each system, with defined thresholds that trigger different levels of governance review. Not every AI system needs board-level oversight, but you need a mechanism for determining which ones do, and that mechanism needs to be consistent, documented, and enforceable.

5. Realign incentives. People do what they’re rewarded for. If every incentive in your organization points to the importance of speed and cost reduction above all else, responsible AI will be treated as a source of friction—something to route around rather than a necessary part of the work. Tie a portion of leadership evaluation to responsible AI metrics: risk incidents identified and addressed, governance reviews completed, willingness to halt or modify deployments that don’t meet standards.

6. Begin reviews on your highest-risk systems. As soon as you have your ethical framework and assessment process in workable shape, run your first reviews on the systems that your risk inventory identified as the most exposed. You get two things out of this: real findings about your most urgent risks and an early read on whether the governance infrastructure actually works under pressure.

7. Build your skill development plan. Responsible AI requires capabilities most organizations do not yet have. Your leadership needs to understand AI risk well enough to govern it. Your technical teams need bias detection and human-centered design skills. Your frontline managers need to understand how AI is changing the work their teams do. Your legal and compliance teams need to understand the rapidly evolving regulatory landscape. Design a targeted development program that addresses the most critical gaps and then build its implementation into the governance cadence.

DAYS 61-90: EMBED

In the last 30-day stretch, the focus shifts to ensuring the system survives contact with the day-to-day pressures of running an organization.

1. Build exit plans. Every AI system in your portfolio should have a defined exit pathway, documented and owned, that shows how to safely shut it down. These are the exit protocols of the CARE framework, and they must be put in place before you need them. The time to design a shutdown procedure is not in the middle of a crisis.

2. Establish the governance rhythm. Set up a regular meeting with an outline agenda for monitoring and responding to responsible AI issues. This creates a protected space on the calendar for reviewing the risk landscape, surfacing emerging issues, and assessing the health of your governance processes.

3. Embed governance into operations. Responsible AI cannot live as a separate process that runs alongside normal operations—it needs to be woven into them. Every new AI system above a defined risk threshold requires a governance review before deployment. Every existing system requires periodic reassessment. No exceptions. This is where responsible AI stops being a project and starts becoming part of how you operate.

4. Iterate. By day 90, you have live data—use it. Where are the bottlenecks? What’s working well and what isn’t? Is the culture shifting or is it stuck in place? The aim here is to learn from everything you’ve done so far and use these learnings to iterate the next version of your governance engine.

CONCLUSION

Claude Mythos is not an anomaly. It’s a preview of the kind of dangerous capabilities AI models will bring with them in the future. The question is not whether your organization will be affected by AI systems of this power. It will. Rather, the question is whether you will have the governance infrastructure in place when they arrive. Any organization can take significant steps toward putting this infrastructure in place in a single quarter. There’s no excuse for not starting today.

[Photo: Who is Danny/Adobe Stock]

Original article @ Fast Company

The post Here’s how to jump-start your company’s responsible AI governance in 90 days appeared first on Faisal Hoque.

Published on April 30, 2026 07:11

April 22, 2026

Only Dead Things Stay the Same

Most of us are living in the waiting room of our own lives—convinced that peace is one promotion, one milestone away. It isn’t. Stop waiting for perfect. Start living.
KEY POINTS
- We suffer from the belief that life should be perfect.
- Wabi-sabi teaches that flaws are not obstacles to beauty—they’re the source of it.
- Accepting imperfection does not mean you stop caring about quality.

In December 2006, my late friend and mentor Ito San took me to rural Japan for the first time. Our journey began in Yokohama and wound its way to the foot of the Japanese Alps, to an old ryokan called Yarimi-kan in the Shin Hotaka hot spring area of Gifu prefecture, nestled between Mt. Yarigatake, Mt. Hotaka, and Mt. Kasagatake.

“Come,” said Ito San. “It is time for kaiseki.”

I had heard of kaiseki before—a traditional multi-course meal based on the seasons. I had even had some kaiseki meals in the U.S. But this was my first true experience of kaiseki, and it baffled me.

The ceramics the meal was served on were imperfect—here a glaze that pooled unevenly, there a rim that wasn’t quite round. The ingredients were barely manipulated by the team of cooks, presented in a way that was close to their natural state rather than being polished into something impressive. And the entire experience unfolded slowly, quietly, without any attempt to dazzle.

By every standard I understood, this meal should have felt unfinished and unimpressive. Instead, it was one of the most beautiful experiences of my life. I could not explain why.

The Trap We Live In

For most of us, most of the time, life is always just about to happen. When I get my dream job, when I have a million dollars in the bank, when I find the love of my life, then everything will be good and life can begin.

But that moment rarely comes. Or more precisely, it comes, but we find that it does not bring the peace or the joy that we dreamt of. Even after the promotion, after the windfall, after the wedding, life continues to move, to demand, to surprise, and, yes, to break. Life never graduates to a settled state, because that is the nature of life; indeed, it is perhaps the best definition we could have of being alive. Only dead things remain as they were; things that are alive and vital are in a constant process of becoming.

And we don’t just want things to be good; we want to know that they’ll stay good. We want the ground to stop shifting beneath us. But it never does, because being alive means not standing still.

And in the meantime, we suffer—and not just from the fact that life is always imperfect and unsettled but, even more, from our belief that it shouldn’t be.

The Word I Was Missing

The philosophy that kaiseki is built on has a name: wabi-sabi. It is a worldview rooted in three principles: nothing is perfect, nothing is permanent, nothing is complete. And rather than seeing this as a cause for complaint, wabi-sabi sees this as a reason for joy. It is a philosophy that celebrates the beauty in impermanence and imperfection.

The kaiseki meal that Ito San gave me expressed this beautifully. The imperfect plates, the seasonal ingredients that would be gone in days—the meal was a celebration of what is, rather than a longing for what we think should be. And over the years, I began to understand that this is what had moved me so much about that meal: It was a living demonstration that imperfection and impermanence are not obstacles to beauty but rather its source.

Modern psychology increasingly supports what wabi-sabi teaches. Research on self-compassion by Kristin Neff has shown that people who respond to their own imperfection with warmth rather than harsh judgment report lower anxiety, less depression, and greater well-being. Wabi-sabi is, in a sense, self-compassion made into a worldview. It says, You are imperfect, your life is unsettled – and neither of those things needs fixing.

What Ito San Taught Me

As someone who has spent decades building companies, I used to treat imperfection as the enemy, something to be eliminated before I could move forward. Every flaw in a product or a strategy felt like something that needed fixing before it was ready for the world. And as in work, so in life; every stage of life felt like a rough draft of the version that was coming next.

That meal with Ito San at the foot of the Japanese Alps didn’t make me stop caring about quality. But—eventually— it changed what I was waiting for. I stopped requiring everything to be right before I could act, and I stopped requiring perfection before I could feel at peace.

And now, almost 20 years later, I see clearly that the most meaningful work I’ve done—as an entrepreneur, as a writer, as a father—has come not from having everything figured out but from moving forward within conditions that were imperfect and incomplete.

3 Practices for Embracing Imperfection

1. Notice beauty in something unfinished. Once a day, pause to notice something imperfect that moves you—a conversation that didn’t resolve neatly, a project still taking shape, a season that’s already turning. Notice it the way you’d notice a cracked bowl in a kaiseki meal: not as a problem but as something with character.

2. Let something be good enough. Choose one task this week and release it before it’s perfect. Send the draft. Ship the idea. Notice that the world doesn’t end—and that something unpolished can still be valuable.

3. Name what you’re waiting for. Write down the condition you’ve set for your own peace: “I’ll feel settled when _____.” Then ask yourself whether you’re postponing a life that’s already here.

Get Busy Living

Ito San passed away some years after our journey together. The mountains still stand, the river still flows. I have returned to that first kaiseki meal many times in memory and, eventually, in my own kitchen, preparing a seven-course meal to honor what he taught me.

He never explained wabi-sabi to me in words. He didn’t need to. He served it on uneven plates, in a wooden inn that creaked with age, beside a river that was never the same from moment to moment. The lesson was the experience itself. There is beauty in a life that is unsettled and imperfect.

As Andy Dufresne almost said in The Shawshank Redemption: Get busy living, or get busy waiting.

[Feature Photo Source: Nito/Adobe Stock]

Original article @ Psychology Today.

The post Only Dead Things Stay the Same appeared first on Faisal Hoque.

Published on April 22, 2026 11:53

April 21, 2026

Here’s how to jump-start your company’s AI transformation in 90 days

This step-by-step road map will guide you through an essential technological upgrade.

AI is already transforming how organizations operate, compete, and create value, and adoption is accelerating across industries. Businesses that are experimenting with AI and learning how to move from initial ideas to deployment are building the infrastructure to deliver value both now and in the future. Those that are waiting—for the technology to mature, for conditions to stabilize, for someone else to figure it out first—risk finding that a competitor has upended their market before they’ve even begun adapting to this new era.

Given the current chaos in global markets and geopolitics, the temptation to avoid change and pursue a defensive strategy is strong. But it must be resisted. Organizations that gain ground now will be difficult to catch. Those that fall behind may find that they’ve missed their chance for good. A strong innovation pipeline is more important today than ever before.

THE FOUR PILLARS

Innovation efforts fail for predictable reasons that rarely have anything to do with the quality of available ideas or technology. Instead, these efforts fail because the conditions needed to sustain them aren’t in place. Those conditions rest on four pillars.

Leadership mindset. The biggest barrier to innovation isn’t budget or technological capability—it’s the mindset of the leadership team. Leaders tend to look for stability and certainty when making decisions. Yet instability and change are the default conditions of the AI age, and senior executives need to adapt to this new reality.

Organizational design. This is the pillar that gets overlooked most often, and it’s the one that most frequently determines whether innovation programs succeed or fail. If a transformational approach to innovation is to take root, then reporting lines must value experimentation over immediate returns and incentive structures must not punish risk-taking. Decision rights must also be unambiguous across the organization.

Capital allocation. Innovating through uncertainty doesn’t necessarily mean spending more, but it does mean spending deliberately—starting from strategic priorities and then holding every dollar accountable through a stage-gated approach to funding that rewards progress and kills inertia.

The innovation pipeline. Most organizations have a portfolio problem, not an idea problem. They either bet everything on one transformative initiative or they scatter resources across dozens of underfunded experiments. What they need instead is a balanced portfolio of bets with explicit mechanisms for deciding what moves forward.

These four foundations are the basis for every phase of the 90-day plan that follows.

THE 90-DAY PLAN

DAYS 1-30: DIAGNOSE

The goal of this phase is clarity. You can’t build what you can’t see, and most organizations have never taken a systematic look at where they stand when it comes to their capacity for innovation.

1. Start with purpose, not technology. The most common mistake in AI adoption is asking What can AI do? instead of What problems are preventing us from achieving our strategic goals, and could AI help solve any of them? Begin by reaffirming your organizational purpose, mapping your strategic priorities, and then working backward to identify bottlenecks and points of friction. At this point, you can start thinking about how AI might solve these problems. The output of this step is a ranked list of opportunities grounded in what your organization is trying to achieve.

This step follows the Outline phase of the OPEN framework for AI innovation.

2. Diagnose your organization’s relationship to change. Culture is the primary barrier to AI transformation, so before you invest in tools, assess the environment those tools will be deployed into. Determine whether your organization treats AI as a legitimate business tool or a sign of questionable practice, and whether your frontline teams have ideas for AI applications that nobody has thought to ask them about. This diagnostic will tell you whether your culture will support what you’re about to build. If the answers you get are uncomfortable, solving these cultural issues must be a priority.

3. Audit your current innovation activity and spend. Inventory every experiment, pilot, and stalled initiative, then map every innovation dollar being spent. Flag anything that’s being funded out of inertia rather than strategy—the endlessly “almost ready” pilot or the expensive software subscription that nobody uses. This audit gives you two things: a clear picture of what you’re doing and spending right now, and the raw material for the capital allocation decisions that come later.

For a detailed example of a thorough baseline assessment, see our practical playbook for AI innovation pipelines.

4. Map your decision rights. This is where organizational design enters the picture. You need clear answers to five questions: Who approves a new experiment? Who decides when a project advances to the next stage? Who can reallocate budget between initiatives? Who has the authority to kill a failing project? And who can request resources across functions? If the answers are ambiguous—and in most organizations they will be—then every innovation project you launch is structurally vulnerable. An accurate decision rights matrix is the structural precondition for everything that follows.

DAYS 31-50: ORGANIZE

The ground is mapped. Now it’s time to build the machinery that will sustain innovation over the long term. This phase is about putting the human and organizational infrastructure in place so when experiments begin, they have the support they need to succeed.

1. Establish ownership and structure. Someone needs to own the innovation pipeline, with explicit authority to allocate resources, kill underperforming projects, and escalate blockers. The best approach is to create a central point of accountability that sets standards and can take a unified view of the portfolio paired with leads embedded in each function or business unit who own execution in their area.

2. Realign incentives. If managers’ performance is measured exclusively on quarterly delivery and cost efficiency, they will never protect innovation spend. Changing behavior means changing incentives. Tie a portion of leadership evaluation to innovation pipeline health. Measure active experiments running, the speed at which failing projects are identified and killed, stage-gate throughput, and willingness to support experimentation. The organizational culture you diagnosed in Phase 1 won’t change because you’ve drawn a new org chart. It will change because you’ve changed what the organization values.

3. Establish a recurring innovation rhythm. Set up a weekly session: 30 minutes, with nonnegotiable attendance from senior leadership. That forces forward-looking thinking into the calendar. When uncertainty dominates, every meeting becomes reactive. This weekly session is a counterweight to that tendency, a protected space for reviewing emerging opportunities that builds the habit of treating innovation as an ongoing discipline.

DAYS 51-70: PREPARE

The infrastructure is in place. Now build your portfolio of AI initiatives and get your first cohort of projects ready to launch.

1. Structure your opportunities as a portfolio. Take the ranked list of opportunities from the Diagnose phase and score them against five criteria: strategic alignment, feasibility, cost, risk, and potential value. The core principle is straightforward: Treat your AI initiatives as an interconnected investment portfolio, not a to-do list. You want a deliberate mix across time horizons—quick wins that can demonstrate value within weeks alongside medium-term projects that require deeper integration and longer-term bets that could reshape the business.
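The five-criteria scoring above can be sketched as a simple weighted model. The weights, the sample opportunities, and the convention that higher scores mean lower cost and risk are all illustrative assumptions; any real portfolio would calibrate these to its own strategy.

```python
# Hypothetical sketch: scoring opportunities against the five criteria named in
# the text, on a 1-5 scale per criterion. Weights and sample data are
# illustrative assumptions only.

WEIGHTS = {
    "strategic_alignment": 0.30,
    "feasibility": 0.20,
    "cost": 0.15,            # convention here: higher score = lower cost
    "risk": 0.15,            # convention here: higher score = lower risk
    "potential_value": 0.20,
}

def score(opportunity: dict) -> float:
    """Weighted sum across the five criteria."""
    return round(sum(opportunity[c] * w for c, w in WEIGHTS.items()), 2)

opportunities = {
    "invoice-matching copilot": {"strategic_alignment": 4, "feasibility": 5,
                                 "cost": 4, "risk": 4, "potential_value": 3},
    "demand-forecasting model": {"strategic_alignment": 5, "feasibility": 3,
                                 "cost": 2, "risk": 3, "potential_value": 5},
}

ranked = sorted(opportunities, key=lambda name: score(opportunities[name]),
                reverse=True)
for name in ranked:
    print(name, score(opportunities[name]))
```

Note how the ranking, not the absolute scores, is what feeds the portfolio mix: quick wins tend to surface at the top, while longer-term bets score lower today but stay in the pipeline across time horizons.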

For the detailed stage-gate portfolio methodology, see this framework for managing AI innovation.

2. Determine and allocate capital on a purpose-driven basis. Start from your strategic priorities and work forward. Ask what level of investment your innovation portfolio actually requires to deliver against your goals. That number may be more than is currently budgeted. If so, confront that gap honestly. The question isn’t How do we spread what we have across these projects? It’s What do we need to invest to remain competitive, and how do we fund that? Once the total budget is determined, allocate it across the portfolio with clear stage gates. No project gets its next tranche of funding without hitting defined milestones.

3. Design your first cohort of experiments. Select the three to five highest-scoring quick wins from the portfolio and design each one for launch. Every experiment needs an owner, a budget, a timeline, a success metric, and a defined milestone at which it will be evaluated. Structure each experiment as a learning journey—you’re testing not just whether the technology works but whether people adopt it and what organizational capabilities it requires.

DAYS 71-90: IGNITE

The machinery is built. The portfolio is structured. Now light the fuse and lock in the governance processes that will enable the system to sustain itself.

1. Launch your first experiments. Get your first cohort of bounded, measurable AI experiments into motion. Early visible results are the single most powerful antidote to organizational paralysis—they demonstrate that innovation is possible, that AI can deliver tangible value, and that the system you’ve built actually works. Pay attention to what breaks. That information is as valuable as the experiment results themselves.

2. Establish mechanisms for long-term operation. As you launch your first experiments, establish the governance structures that will sustain the pipeline beyond the initial 90-day push. First, innovation pipeline health becomes a standing agenda item in senior leadership meetings—not an annual innovation day or a quarterly update buried in a strategy deck. Second, stage-gate discipline is fully embedded: Every innovation investment requires a gate review at each stage—no exceptions. Third, design and schedule the quarterly portfolio review process that will become the engine for continuous improvement.

3. Stress-test your organizational design. Take stock of what the first 70 days have taught you about your own structure. Are the decision rights working in practice, or are decisions still getting stuck? Is the ownership model delivering both coherence and relevance, or has it become a bottleneck? Are the incentive changes having the intended effect on behavior? Is the culture beginning to shift, or are the old defaults reasserting themselves? This isn’t a onetime fix. It’s a system that needs continuous tuning. But by day 90, the system should exist and be operating, even if imperfectly.

4. Begin the second cycle. In your quarterly review, ask: What worked? What didn’t? What scales? What gets killed? What have we learned about our organization’s capacity to support AI innovation? Feed the answers back into the pipeline. Replenish the portfolio with fresh opportunities. Assess whether the balance across time horizons is right. The 90 days were the ignition. This first review is the beginning of the ongoing engine.

CONCLUSION

This plan won’t transform your organization in 90 days. But it will jump-start its journey toward AI transformation. At the end of the period, you’ll have a functioning innovation pipeline and a portfolio of AI initiatives, your first experiments will be in motion, and the governance machinery to sustain continuous innovation will be in place. The 90-day plan is just the key to the ignition. After that, the road is all yours.

[Photo: knssr/Adobe Stock]

Original article @ Fast Company

The post Here’s how to jump-start your company’s AI transformation in 90 days appeared first on Faisal Hoque.

Published on April 21, 2026 07:42

April 14, 2026

The cost of hollowing out human accountability

20 seconds to approve a military strike; 1.2 seconds to deny a health insurance claim. The human is in the AI loop. Humanity is not.

In the first 24 hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded 3,000—twice as many as in the “shock and awe” phase of the 2003 invasion of Iraq, according to Pete Hegseth. This unprecedented number of strikes was made possible by artificial intelligence. The U.S. Central Command (Centcom) insists that humans remain in the loop on every targeting decision, and that the AI is there to help them to make “smarter decisions faster.” But exactly what role humans can play when the systems are operating at this pace is unclear.

Israel’s use of AI-enabled targeting in its war on Hamas may offer some insights. An investigation last year reported that the Israeli military had deployed an AI system called Lavender to identify suspected militants in Gaza. The official line is that all targeting decisions involved human assessment. But according to one of Lavender’s operators, as the humans involved came to trust the system, they limited their own checks to nothing more than confirming that the target was a male. “I would invest 20 seconds for each target,” the operator said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

The same pattern has already taken hold in business. In 2023, ProPublica revealed that Cigna, one of America’s largest health insurers, had deployed an algorithm to flag claims for denial. Its physicians, who were legally required to exercise their clinical judgment, signed off on the algorithm’s decisions in batches, spending an average of 1.2 seconds on each case. One doctor denied more than 60,000 claims in a single month. “We literally click and submit,” a former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”

Twenty seconds to approve a strike; 1.2 seconds to deny a claim. The human is in the loop. Humanity is not.

DIFFICULTY BY DESIGN

The novelist Milan Kundera writes of the terrifying weight of being confronted with the enduring seriousness of our actions. But while lightness might seem attractive in the face of this impossibly heavy burden, it is ultimately unbearable. Disconnection from the weightiness of our decisions deprives them of substance, of meaning.

AI promises to lift the burden of difficult and cognitively demanding work—it makes the work lighter. Decisions become quicker and easier. In many domains, that is genuine progress. But some decisions are important enough that we ought to feel their weight. It ought to take time to decide to kill a person or deny a healthcare claim. It ought to be difficult to figure out which buildings to bomb. In such decisions, the difficulty serves a function—it is a feature, not a bug. It is a mechanism that forces institutions to reckon with what they are doing. When AI removes that weight, the institution doesn’t become more efficient. It becomes numb. When AI takes away the burden of making decisions about who lives and who dies, this is not progress. This is moral degradation.

If the human in the loop is spending mere seconds on each decision, then the question of whether the system is autonomous or human-supervised becomes largely semantic. We need to insist on humanity in the loop as well. In cases like these, the human must be allowed to be human, even if that means they are slower, less accurate, and less efficient. That is the cost we pay for something absolutely necessary: We need the human to feel the weight of the decisions they are making, because difficulty creates the friction that makes people pause, question, and push back.

INSTITUTIONAL CULTURE

When hard decisions become easy, the institution itself changes. People stop questioning because there is nothing that feels worth questioning—the system has already decided, and the human’s role is to confirm. Dissent drops because dissent requires friction, and friction has been engineered out. Accountability is undermined because everyone knows that it’s the computer that’s making the decisions.

The Cigna physician who denied 60,000 claims in a month was not cruel. She had been placed in a system where denying a claim required no more effort than clicking a button. The system did something more insidious than corrupt her judgment—it made it unnecessary. That is why the Cigna case is not a story about a single bad actor. Rather, it is a story about what happens to any institution that systematically engineers the weight out of its hardest decisions.

THE COST OF HOLLOWING OUT ACCOUNTABILITY

Hollowed-out accountability has a cost that shows up in three places for businesses.

First, liability. An algorithm cannot be sued, fired, or held responsible for its errors. The organization that deployed it can. Rubber-stamp oversight is not a legal gray area—it is a liability waiting for lawyers to mobilize.

Second, institutional fragility. When humans stop genuinely engaging with decisions, they stop learning from them. When the machine always seems to get things right, no one develops the kind of judgment needed to determine when it is actually wrong. Organizations that optimize humans out of their decision loops become dependent on systems they no longer fully understand. And this leads to brittleness in precisely the moments that demand resilience.

Third, trust. Customers, employees, and regulators may want to know whether an AI made a decision. But they will definitely want to know if anyone is truly responsible for it. The answer, in too many organizations, is no, and that answer has deep consequences for the organization’s relationships with those it is answerable to.

THE WEIGHT TEST

Before using AI to make any decision process easier, leaders should ask four questions.

1. What institutional behaviors does the current difficulty of this decision produce—e.g., scrutiny, escalation, dissent—and what is the cost of losing them?

2. If something goes wrong, can we identify someone who wrestled with the decision—or only someone who clicked approve?

3. How would we know if the humans in this process have become rubber stamps? What would we measure, and are we measuring it?

4. If the people affected by this decision learned exactly how it was made and how long the human spent on it, would the institution be comfortable defending that process in public?

These questions won’t appear in any AI vendor’s implementation checklist. That is precisely why they matter.

CONCLUSION

We are told that AI liberates us—from drudgery, from slow processes, from the burden of hard decisions. And often it does. But not every burden is a problem to be solved. Sometimes, the burdens are the point. The weight a commander should feel before authorizing a strike, the effort a physician expends before denying care—these are not inefficiencies to be optimized away. They are the mechanisms that keep institutions honest about the power they exercise.

Of course, organizations that engineer that weight away will be faster and lighter. For a while, it may even appear like they are winning. But these organizations will also be the ones that discover, too late, that the difficulty was the price of being the one who decides—and the moment an organization stops paying it, it has no business deciding at all.

[Source Illustration: Freepik]

Original article @ Fast Company


Published on April 14, 2026 15:23

April 12, 2026

Your Next Chapter in the Age of AI

Six big things are disappearing from your life—and what to do about them.
KEY POINTS
- AI is eroding human capacities – effort, attention, judgment, agency – often in ways we mistake for progress.
- The losses are subtle, and that's what makes them dangerous.
- Small, deliberate acts can reverse the drift before it becomes permanent.

AI is making life easier, faster, and more convenient. And much of that is genuine progress. But quietly, almost imperceptibly, some important things are disappearing from our lives – things we didn’t realize we depended on until they started to fade away. Over the past year, I’ve been exploring these losses in this column, and a pattern has emerged. The things we’re losing aren’t dramatic. They’re subtle. They’re the kind of things you don’t miss until one day you reach for them and find they’re gone.

Here are six of the things we’re losing, and six steps you can take to get them back.

1. The Illusion of Ease

We tend to think of ease as an unqualified good. But this framing hides something important: some forms of difficulty aren’t obstacles to the life we want. They’re part of how that life gets built.

When I was growing up in Bangladesh, getting hold of a Simon and Garfunkel cassette required real effort. That effort made the music matter to me in a way it wouldn’t have if I’d simply asked Siri to play it. The investment created the attachment. This principle applies far beyond music. Effort is how we turn information into knowledge, how we turn events into the experiences that shape us.

AI is removing friction from our lives at an extraordinary rate, and much of that removal is welcome. But we should ask of any type of friction whether there is something we need to preserve there. When friction is purely procedural, eliminate it gladly. But when it’s the kind that teaches you to notice, to remember, to care, then removing it doesn’t save you effort. It costs you growth.

Before reaching for the easiest path, pause. Ask whether the struggle you’re about to bypass is the kind that builds something in you. If it is, it’s not a cost. It’s the whole point.

2. The Collapse of Attention

As the world speeds up, something precious is collapsing: our ability to be fully present. We race through days packed with competing demands, multitasking our way through meetings and meals, telling ourselves we’re being productive. But the research is clear: multitasking makes us worse at everything we’re trying to do.

The deeper loss, though, isn’t cognitive – it’s human. When we hurry through life, we stop being present to ourselves and to the people we care about. We live reactively rather than intentionally, letting the world dictate its pace instead of choosing our own.

This week, try single-tasking: choose one thing and give it your complete attention. Mute the notifications. Protect a window of time that belongs to you and not to your inbox. The world will keep accelerating. But you get to choose the speed at which you move through it.

3. The Loss of Self

We tell ourselves stories to live, as Joan Didion wrote. We make sense of our experiences by shaping them into narratives. In doing so, we don’t just describe our lives. We create them. The struggle to find the right word for what we feel is part of how we come to understand what we feel.

Increasingly, we’re handing that work to AI. We ask it to draft our emails, polish our reflections, and sharpen our descriptions of things that matter to us. The AI-written version may read better by any conventional standard. But it’s not the record of a particular person making sense of a particular life. Storytelling is more than communication – it’s self-creation. When AI shapes our stories, we don’t just lose words. We lose the process through which we become ourselves.

Not everything you write is identity work. A status update to your team is a task, so use whatever tools help. But when you’re writing about your relationships, your struggles, or to someone you love, that’s a story. Write the messy draft first. Your narrative is too precious to hand over to a machine.

4. The Comfort Trap

We knew social media was damaging our children. We knew algorithms were feeding us rage because rage keeps us scrolling. We knew, and yet we did almost nothing. By the time we finally acted, the damage ran deep, and the fixes came too late.

This is what happens when we confuse tolerance with acceptance. Tolerance feels like patience. But too often it’s just avoidance, wearing a respectable mask. True acceptance means looking at reality unflinchingly and engaging with it, even when what you see is something you’d rather not deal with.

We are repeating this pattern with AI. We see the risks and scroll past the headlines, assuming someone smarter is handling it. But acceptance need not give way to resignation. Viewed properly, accepting how things really are gives us more agency, not less. Ask yourself: What am I hoping will just go away? Once you’ve named it, it becomes much harder to keep pretending you don’t see it.

5. The Crisis of Judgment

AI can draft your emails, analyze your data, and schedule your meetings. It promises liberation from the management burden of daily life – and that promise is real. But as AI takes over more of the doing, it increases the burden of deciding. After all, someone still has to choose what’s worth doing in the first place.

Peter Drucker drew a useful distinction here: efficiency is doing things right; effectiveness is doing the right things. AI makes us dramatically more efficient. But effectiveness – knowing what matters, what to prioritize, where your uniquely human insight is needed – remains stubbornly in our hands. And when you can do ten times as much, the cost of doing the wrong things multiplies accordingly.

The danger is that we stop trusting ourselves to make these calls. We defer to the tool, or we simply let the pace of output substitute for the harder work of reflection. Start building the muscle instead: at the end of each day, ask yourself what you chose to do, what you delegated, and why. The goal isn’t to reject AI’s help. It’s to make sure you’re leading yourself, not just keeping up.

6. Reclaiming Agency

Perhaps the deepest loss of all is the erosion of our capacity to act. When everything is uncertain – when AI is reshaping work, when the ground keeps shifting – it’s natural to freeze. The sheer volume of change can make us feel powerless, and powerlessness breeds passivity.

But agency doesn’t require certainty. It requires grounding. And one of the most powerful ways to rebuild that ground is to return to the things that have sustained us before – the songs, the books, the practices, the people who gave us strength in the past. Return isn’t regression. It’s bringing who you’ve become to what you thought you knew, and discovering that it can still give you what you need.

Recently, I listened to “The Boxer” by Simon and Garfunkel. It was one of my favorite songs growing up, and I’ve heard it thousands of times. But listening in my fifties, I heard something new – a tenderness for the young man I once was, and in that tenderness, a source of strength. The song hadn’t changed. I had. And it gave me exactly what I needed for the present moment.

When the world feels paralyzing, don’t only look forward. Look back. Return to one song, one book, one practice that marked a turning point. The resources you need to face what’s ahead may already be part of your story.

Your Next Chapter

These six losses share a common thread: none of them announce themselves. They arrive silently, disguised as progress. And that is precisely what makes them dangerous. By the time you notice what’s gone, the habits that once sustained you have already started to fade.

But the reverse is also true. Each of these losses can be reversed through small, deliberate acts: writing in your own voice, slowing down for one hour, asking yourself what you’re tolerating that deserves a real response. You don’t have to take all six steps at once. Pick one. Start there.

AI will keep advancing. The world will keep accelerating. But the most important question of this era isn’t what machines can do for us. It is “What are we willing to keep doing for ourselves?”

[Source: EDA/Adobe Stock]

Original article @ Psychology Today.


Published on April 12, 2026 06:25

April 4, 2026

The China exposure every CEO must address

The supply chains, innovation shifts, and geopolitical moves that are rewriting the rules for every Western company.

Most Western executives think their exposure to China begins and ends with the question of whether they buy from or sell to Chinese companies. They are wrong. China’s capacity for innovation, its manufacturing dominance, and its geopolitical influence are changing the competitive landscape that all businesses operate in. Even when Chinese companies aren’t swimming in your part of the ocean, the country’s policies and priorities have a direct impact on the water.

The facts are undeniable. The research institute Rand Corp. estimates that Chinese AI models now operate at one-sixth to one-fourth the cost of comparable American systems, and a U.S. advisory commission warned this week that Chinese AI now dominates global open-source usage rankings. But artificial intelligence is only one expression of a broader shift. The same country that is closing the AI gap also manufactures over 80% of the world’s batteries, builds more commercial ship tonnage in a single year than America has since World War II, and is rapidly becoming the partner of choice for countries looking for an alternative to an increasingly unpredictable United States.

These forces—innovation, industrial capacity, geopolitical realignment—are reshaping the operating environment for every company, including those that don’t trade with China at all. Business leaders who want to prepare their organizations for this changed world need to start by understanding these three fundamental forces and their effects.

THE INNOVATION MYTH IS DEAD

For decades, the comfortable Western narrative held that China could manufacture but couldn’t innovate—that it could copy but never create. That narrative is false, and has been for a while.

In AI, Chinese models have moved from trailing U.S. frontier systems by double digits on standard benchmarks to near-parity, and they deliver these results at a fraction of the cost. Lee Kai-fu, founder of the Beijing startup 01.AI, told Reuters the gap had narrowed to three months in some core technologies, and that China was now ahead in certain areas. Nature describes the Kimi K2 model by Moonshot AI as “another DeepSeek moment,” matching or surpassing some Western rivals on specific tasks.

When it comes to electric vehicles, the transformation is even more vivid. BYD’s Yangwang U8 is an SUV that literally floats and can park sideways like a crab. The company’s Denza Z9GT model charges from 10% to 70% in five minutes and has a range of 800 kilometers. BYD sold over 417,000 vehicles overseas in 2024, aimed for 800,000 in 2025, and ended up selling more than a million. These aren’t cheap knockoffs. They are better products at lower prices.

None of this means the old problems have disappeared. A report by the Office of the U.S. Trade Representative confirms that effective remedies for trade-secret theft remain difficult in China. Academic misconduct is real enough that Beijing itself is now moving to punish universities that fail to sanction research fraud. But here’s the point most Western leaders miss: China is so vast that it doesn’t need the whole system to be world-class. If even 20% of its innovation economy is operating at the frontier, that’s a force larger than most countries’ entire output. And the trajectory is moving in one direction.

THE SUPPLY CHAIN YOU DON’T SEE

Even executives who don’t buy from or sell to China are exposed to its industrial dominance in ways they may not understand. The United States produced around 10 ocean-going commercial vessels in 2024. China produced more than 1,000 and now controls the world’s largest merchant marine fleet. That is quite a gap, and it reflects something qualitative—control over the physical plumbing of global trade. And shipbuilding is just one example of a pattern playing out across critical infrastructure—from ports to cranes to telecommunications equipment.

The dependency runs deeper than physical infrastructure. For 19 out of 20 important strategic minerals, China is the leading refiner, with an average market share of 70%. More than 90% of battery-storage applications rely on lithium iron phosphate (LFP) batteries that are almost exclusively supplied from China. Nearly all batteries used for power grids depend on China for at least one step in the supply chain. Even firms that think they are not exposed to China often discover that the vulnerability sits a tier or two upstream.

This supply chain exposure is growing, thanks to a predictable, repeating pattern. Beijing identifies strategically important sectors and directs massive investments into these areas. Chinese manufacturers rush to compete, leading to overproduction. Global prices collapse, non-Chinese competitors can’t survive at those margins, and within a few years, China is the dominant—or only—supplier left. That is how solar went from a competitive global market to one in which China controls over 80% of every major manufacturing stage. Indeed, so extensive was Chinese investment that in August 2025, the Chinese government encouraged firms to reduce production and eliminate overcapacity, because China was on track to produce roughly twice the solar cells the world was forecast to buy in 2025.

GEOPOLITICAL SHIFTS

The third force might be the hardest for many Western business leaders to absorb: the geopolitical center of gravity is moving. The current U.S. administration has directed withdrawal from 66 international organizations, following earlier exits from the World Health Organization (WHO) and the Paris Agreement on climate change. In the resulting vacuum, countries are turning to China. Canada agreed to slash EV tariffs from 100% to 6.1%, with Prime Minister Mark Carney calling ties with China “more predictable.” When British Prime Minister Keir Starmer visited Beijing in January, Reuters described a broader “pivot to China” that was gathering pace, with investors saying Beijing could offer “predictability and certainty” when the U.S. feels more uncertain. Meanwhile, China and Iran have built a yuan-denominated trading system that sidesteps the dollar entirely—one small piece of a de-dollarization trend with implications far beyond the oil market.

Let’s be clear: This is a reluctant embrace, not an enthusiastic one. The Human Rights Watch organization documents China’s systematic denial of freedoms of expression, association, and religion. Another watchdog group, Freedom House, rates China 9 out of 100 for political rights and civil liberties, giving it a categorical “Not Free” status. Countries are being pushed into China’s arms, not jumping willingly. But perception shapes markets as much as principle does, and right now, China looks stable, predictable, and oriented toward long-term outcomes at a moment when America looks like none of these things.

WHAT A CEO NEEDS TO DO

None of these forces are within a CEO’s control. But what business leaders can do is develop strategies for navigating a world that these forces shape. Here are five things to do now.

1. Map your actual exposure. Most companies have no visibility beyond their tier-one suppliers. That means they literally cannot see where their China dependence lies. McKinsey & Co.’s supply chain risk survey found that 82% of companies were affected by the new U.S. tariffs in 2025—and many didn’t see it coming. Before you can make any strategic decision about China, you need to know where China already sits inside your business. If you can’t map it, that is the first problem to solve.

2. Use China’s own plans as forward intelligence. China’s policy announcements are the most underused source of competitive intelligence available to Western business leaders. The 2026–2030 five-year plan is not a vague aspiration document—it is a procurement directive that triggers mandatory coordination across every central ministry, provincial government, and state financial institution.  

3. Diversify at the structural level. Build a portfolio of suppliers, not a dependency on one or two; build a portfolio of markets instead of betting on one geographical region. The point is not to eliminate exposure to China, but to be intentional about spreading the risk. The companies that have thrived globally—such as Apple, Nvidia, and the NBA—haven’t decoupled from China. They have diversified around it while remaining deeply engaged.

4. Protect what is yours, but don’t close the door. If you create intellectual property of any kind, the U.S. Trade Representative’s findings make clear that protection in China remains difficult. Treat IP security as a core operational discipline, not a legal afterthought. China is also tightening its own trade-secret regulations—which creates both additional protections and obligations. But protection isn’t a strategy if it becomes a reason to ignore innovation happening elsewhere. The companies that reflexively reject Chinese technology because it’s Chinese will find themselves paying more for less while competitors adopt what works regardless of origin.

5. Reject the binary. The world that is forming is not one of cleanly delineated blocs. It is a world of partial bifurcation, selective interdependence, shifting regulation, and overlapping spheres of interest. Your strategy needs to operate across that reality, not pretend that it will resolve itself into something simpler.

THE BOTTOM LINE

China is not an easy partner, a trustworthy actor on intellectual property, or a country whose values most Western business leaders share. But it is the largest manufacturing economy on earth, it’s innovating at speed, and it’s filling the space that America is vacating.

You do not have to like it, but you do have to plan for it.

[Source Image: Shan/Adobe Stock]

Original article @ Fast Company


Published on April 04, 2026 13:30

What We Lose When Nothing Is Hard

We engineer friction out of everything. But some of that friction is what turns information into skill and experience into meaning.
KEY POINTS
- Effort turns information into skill and experience into meaning.
- The distinction that matters is between effort that merely delays you and effort that develops you.
- In a world where things are too easy, we have to be intentional about keeping difficulty alive.

I loved Simon and Garfunkel as a teenager (still do). But feeding that love was no easy matter. Growing up in Bangladesh, it took serious acts of devotion to find ways of listening to their songs. You had to find someone who owned a cassette tape or make friends with the owner of the right shop, or wait for a friend to bring one back from abroad. And when you finally got your hands on something, you listened to it over and over, because that tape was all you had.

It’s much easier to find music nowadays. The friction has been reduced to virtually nothing. I don’t even have to go to the trouble of typing anything into Spotify; I can just speak into my phone and Siri does the rest. And this is true of much more than music. Really, it’s about every domain where effort used to be the price of experience, where friction was the cost of learning. That price, that cost, has virtually disappeared.

Think about how easy it is today to find new recipes or learn new cooking techniques; on a completely different scale, think of how easy it is to “meet” and judge potential romantic partners. You don’t even have to get out of bed. You just have to look at your phone and swipe this way and that.

Some of this is definitely progress. But let’s be careful, too. Sometimes, it’s important to pay a price for getting things.

Why Paying the Price Matters

We tend to think of effort as a cost, the unpleasant thing we endure to get what we want. This is true, but it’s leaving something important out of the picture. In addition to being a cost, effort is sometimes doing very important things for us.

First, effort makes data functional: It turns information into personal knowledge. Without effort, it’s impossible to grow skill. That’s why watching 20 YouTube videos about mathematics won’t help you learn math by itself. And it’s why Duolingo is really a video game rather than a serious effort to teach languages.

Effort also makes things meaningful. I had to work so hard to find a Simon and Garfunkel cassette that, when I got one, the effort made the songs matter to me in a way they wouldn’t have if I’d just pulled them up on Spotify. The investment created the attachment. That’s not nostalgia—it’s how we come to care about things.

Effort, in other words, is how we turn events into experience and memories. It’s how we turn information into knowledge that we can use. Without it, things pass through us; but when we put the work in, we gain skills and meaning and understanding. 

A June 2025 study from Harvard and MIT confirms this picture. Participants who used AI to write essays retained less knowledge, demonstrated less originality, and engaged less deeply with the material than those who worked through the difficulty themselves.

Now, while all of this is true and important, it’s equally true and important to say that we should not romanticize effort for its own sake. Plenty of effort is just drudgery—busywork and unnecessary friction, obstacles that teach you nothing. Not every struggle is sacred. And so, rather than completely rejecting effort or always glorifying it, we need to start understanding when effort matters and when it doesn’t.

Knowing Which Struggles to Keep

The distinction is not between effort and ease but between effort that merely obstructs and effort that forms us. Some difficulties are administrative. They are the incidental burdens wrapped around an activity: typing instead of speaking, formatting a document, scheduling a meeting, hunting through menus, moving files from one place to another. These tasks may consume time and attention, but they do not normally deepen understanding, sharpen judgment, or strengthen attachment. If a tool removes them, little of value is lost.

Other difficulties are different. They are not attached to the activity from the outside; they are part of what the activity is. Writing in your own words is how you discover what you think. Working through a mathematical problem is how you learn to see structure. Cooking without exact certainty, tasting and adjusting as you go, is how you develop instinct. Here, the effort is not a barrier between you and the result. It is the process through which the result becomes yours. Remove the struggle, and you may still get an outcome, but you lose the growth, the skill, and, often, the meaning.

So the question is not: How can I make this easier? It is: What kind of difficulty is this? Does it merely delay me, or does it develop me? If the friction is only procedural, eliminate it gladly. But if the friction is what teaches you to notice, remember, or care, then it is not a cost. It is the point.

Practices to Preserve the Right Kind of Friction

Here are three practices that can help you keep the right kind of friction in your life:

- Pause before reaching for the easiest answer. When something feels difficult, it is natural to want relief as quickly as possible. But not every difficulty should be removed on sight. Sometimes a brief pause can help you tell the difference between unnecessary hassle and meaningful effort. Before asking AI to draft the paragraph, try writing a few sentences yourself.

- Ask yourself: Is this saving me effort or saving me from growth? This is a useful question because it gets to the heart of the issue. Some tools free us from repetitive, low-value tasks, and that is a genuine benefit. But other forms of convenience protect us from exactly the kinds of struggle that build judgment, patience, competence, and self-trust. When a shortcut removes only tedium, it is probably worth taking. When it removes the part that would have stretched you, it may be costing more than it gives.

- Add small forms of active engagement back into daily life. The right kind of friction does not have to be dramatic. Often it comes in small, ordinary acts: taking notes instead of passively highlighting, cooking without relying completely on a recipe, rereading a challenging paragraph instead of jumping straight to a summary, walking without constant stimulation. These moments ask a bit more of us, but they also return more. They deepen attention, strengthen memory, and make experience feel more fully our own.

The goal of these practices is not to make life harder for its own sake. Rather, it is to preserve the forms of effort that help us become more present, capable, and alive.

The Work Matters

Convenience is one of modern life’s great gifts, and we should use it gratefully. But we should also be careful not to let it remove the very struggles that teach us, shape us, and help us care. The challenge, then, is not to reject ease, but to become more thoughtful about where we welcome it. And in a world increasingly designed to make everything effortless, part of wisdom may be learning which difficulties are still worth choosing.

[Source: Exnoi/Adobe Stock]

Original article @ Psychology Today.

The post What We Lose When Nothing Is Hard appeared first on Faisal Hoque.

Published on April 04, 2026 13:01

How to lead when nobody knows what’s coming

The case for calm in a world on fire.

“If you can keep your head when all about you are losing theirs and blaming it on you, yours is the world, and everything that’s in it.”


—Rudyard Kipling


Right now, CEOs are confronting a grim reality. The global trade system that has underpinned business planning is unravelling. Ships pile up in harbor, supply chains that have taken years to build are being undermined, and the diplomatic relations that hold world trade together are fraying.

The most destabilizing feature of our current situation is the uncertainty it breeds about the future. If leaders could reliably predict the next catastrophe, at least they could plan and prepare for it. But right now, the ground rules of global commerce (and global politics, but that is a separate story) are being rewritten in real time, and nobody can say where the next chapter will lead us.

The natural human response to this kind of uncertainty is twofold. We try to reduce it and we try to control it. This kind of response is very understandable. There may even be an evolutionary element that makes it natural. However, it is also precisely the wrong mindset for businesses that want to thrive in the midst of this chaos.

THE CERTAINTY TRAP

When the world becomes volatile and mysterious, we search desperately for information, for someone who can tell us what is coming. And while we’re doing that, we plan and plan and plan, as though by planning the future we can master it.

This behavior might look like diligent and responsible leadership. Yet the mindset that accompanies it is often anything but. The desire to do something . . . anything . . . to feel a sense of control over the situation comes from an absence of composure. It also often reflects an unrealistic view about the world. Sometimes, there is nothing we can do to turn disorder into order. A refusal to accept these very real limits can lead businesses into a variety of forms of self-harm.

The leader who can’t sit with not knowing will do almost anything to make the discomfort of uncertainty go away. They will commit to a plan not because it is the best option, but because having a plan feels better than having a question. And this then locks the organization prematurely into a position that will be hard to change. Options that were open are now closed off. Resources that could have been spread across multiple bets are concentrated in one place.

The leaders who navigate chaos effectively do something rather different. Instead of seeking certainty where there is none, they tolerate the discomfort. They stay in the space of not knowing without rushing to fill it. This is not a form of passivity and it is not indifference—it is the type of composure that is a precondition for surviving a world that is turned upside down anew each and every day.

CALM IS A COMPETITIVE ADVANTAGE

In a crisis, your workforce is afraid. They’re reading the same headlines you are. They’re wondering whether their roles will exist next quarter, whether the company will pivot in a direction that leaves them behind, and whether anyone at the top actually knows what’s going on. They are looking to leadership for a signal.

A leader who is visibly emotional and reactive—lurching between strategies, radiating anxiety in every town hall—doesn’t just make bad decisions. They make it impossible for anyone else to make good ones. Anxiety spirals. People stop raising problems because the boss can’t handle more bad news. They stop proposing ideas because the strategic direction changes weekly. And then they disengage and start updating their resumes.

The composed leader has a different effect. They do not pretend everything is fine—composure does not mean lying about reality. Instead, they acknowledge that things aren’t fine and that the future is uncertain—and then they show that uncertainty can be faced without panic. This allows them to see clearly and act effectively, and their steadiness also helps their people stay focused and think clearly. Rather than serving as the catalyst for an organizational anxiety spiral, the composed leader helps generate a competence spiral instead.

The advantage that composure delivers isn’t just about providing a model for your team. It is also strategic. The reactive leader overreacts to noise and is unable to stay the course. The result is resources wasted on half-executed pivots and initiatives launched and abandoned before they can deliver. The composed leader, by contrast, can absorb bad news without treating it as an emergency and can hold a strategic position long enough to know whether it is working.

In volatile environments, the ability to not react is just as important as, if not more important than, the ability to act quickly. This is counter-intuitive for a business world that has a striking bias towards action, but it is essential for leaders to learn this truth, as the future of their business may depend on it.

COMPOSURE IN PRACTICE

Here are three ways to bring composure into your leadership.

1. Start with yourself

Knowing that composure matters is one thing. Actually cultivating it is another — and like any meaningful capability, it requires deliberate practice. Composure isn’t only a skill directed outward; it is, first and fundamentally, an inward discipline. A mindful organization requires a mindful leader: someone who manages stress, reframes risk, and fosters the creativity and clarity that crises demand.

The good news is that cultivating inner composure doesn’t require a meditation retreat. Here is a simple technique you can practice at any point in the working day:

S — Stop what you’re doing, if only for a moment.
T — Take a breath, slowly and completely.
O — Observe how you feel. What are you thinking about right now, at this very moment in time?
P — Proceed. Return to what you were doing—but take notice. Do you feel refreshed? Can you see what you were doing from a different perspective?

There is nothing complex about this technique, but that is precisely the point. It brings your conscious attention back to the present, giving you the chance to choose your response rather than simply react—and interrupting the fight-or-flight shortcuts that evolved for physical danger, not the pressures of leadership.

2. Don’t plan—create options instead

In stable environments, leaders build plans—and in volatile environments, fixed plans can become liabilities. The alternative is to create options—to spread risk across multiple initiatives and to keep several paths open rather than committing prematurely to one.

In practice, this means building and maintaining a diversified portfolio of initiatives—quick wins that generate immediate returns and fund the longer plays, medium-risk bets that deliver value over 12 to 18 months, and moonshots that could transform the business. Crucially, when one bet fails or the world shifts, the portfolio absorbs the shock. The organization survives because it wasn’t dependent on a single outcome.

But running a portfolio is emotionally demanding. You’re funding things that might fail. You’re watching a competitor go all-in on one bet and wondering if they’re right. Anxious leaders can’t tolerate that ambiguity. They collapse the portfolio into a single bet at the first moment of pressure, because committing feels like control, even when it’s reckless.

Composure is what allows a leader to resist that impulse—to hold the portfolio together long enough to see which bets will actually be rewarded by an uncertain world.

3. Bring your people into the process

One of the most common failures of leadership in crisis is the retreat into isolation. Under pressure, leaders narrow their circle, make decisions behind closed doors, and then announce the outcome to an organization that had no part in shaping it.

Collaboration is slow and messy, full of competing perspectives that make the path forward less clear, not more. It takes composure to tolerate that mess. But the mess is where the value is. People who helped shape the response are already prepared to execute it. Diverse perspectives surface risks that no single leader can see. And the cultural readiness that organizations need to navigate rapid change doesn’t happen after the strategy is set—it happens during the process of setting it.

Keeping people close also means keeping them informed. In uncertain environments, silence is toxic—when people don’t hear from leadership, they fill the vacuum with worst-case assumptions. The composed leader resists the twin temptations of going quiet or manufacturing false certainty. Instead, they share what they know, acknowledge what they don’t, and describe the process by which decisions will be made. Simply saying “I don’t know, but here is how we will find out” is not a weakness. In a storm, it is exactly what people need to hear.

THE LEADERSHIP THE MOMENT DEMANDS

Composure is not the absence of urgency. It is the foundation on which effective urgency is built. And this moment demands leaders who are composed—leaders who can hold steady when nobody knows what’s coming, who can keep their head when everyone around them is losing theirs.

It’s quite simple, really. The most powerful thing a leader can do in a storm is to stay calm—and then get to work.

[Source Image: Adobe Stock]

Original article @ Fast Company

The post How to lead when nobody knows what’s coming appeared first on Faisal Hoque.

Published on April 04, 2026 12:33

April 1, 2026

The looming AI risk: Automating middle management destroys critical ethical layer

by Faisal Hoque, Pranay Sanklecha, Paul Scade

Companies seeking to automate middle management risk eliminating capabilities that algorithms cannot replace. Leaders must identify tasks requiring practical and ethical judgment.

In 2023, an investigation by ProPublica revealed that Cigna, one of America’s largest health insurers, had built a system that denied certain insurance claims without any human review of the patient files. First, an algorithm flagged mismatches between patient diagnoses and a list of approved procedures. When a patient’s doctor had ordered a procedure that was not on the approved list, these claims were routed for denial. Cigna’s medical directors – physicians employed specifically to exercise their clinical judgment by reviewing claims – signed off on the algorithm’s decisions in batches. One doctor denied over 60,000 claims in a single month. On average, physicians spent just 1.2 seconds on each case. “We literally click and submit,” one former Cigna doctor told ProPublica. “It takes all of 10 seconds to do 50 at a time.”

The goal of the system was clear. Paying humans to carefully assess and judge claims is expensive. Allowing an algorithm to make the decision instead is much faster and brings with it significant cost savings. Once the company believed the algorithm could do the job effectively, the only reason for retaining humans in the decision-making loop was regulatory compliance.

The logic behind Cigna’s system is one that many companies would happily apply to their entire middle management layer: replace the expensive humans with machines that can make the same rule-based judgments at a far lower cost. This is not a hypothetical threat. Gartner estimates that half of middle management positions could disappear at many companies by the end of this year as AI is deployed more widely. Many companies are now looking to flatten their hierarchies as they explore AI-driven automation and middle management roles already account for a growing share of white-collar layoffs.

The reason middle managers seem so vulnerable to algorithmic replacement is because their roles are often viewed through a mechanistic lens. There is a long-standing distinction in management thinking between leadership and management. On this view, leaders make choices that drive change: they set direction, define strategy, and determine what the organization should become. Managers, by contrast, are responsible for the systematic execution of those choices. Their job is to take what leadership has decided and make it happen – translating strategy into operations, coordinating resources, ensuring compliance.

Despite the controversial distinction between leadership and management, there is no doubt that many management functions are conceived of as being essentially mechanical. It is easy to think of the ideal manager as pursuing the perfectly optimal path toward a pre-determined goal. And if this is indeed how the management function works, then the conclusion seems inescapable: to the extent that algorithmic systems can execute tasks more quickly, consistently, and cheaply than humans, the human manager becomes redundant.

This view is profoundly misguided – and not merely as a matter of theory. The reduction of management to the optimally efficient performance of mechanical tasks is ethically dangerous. Moreover, it is frequently bad for business. The middle management role often involves making the kind of judgment that cannot be performed by an algorithm. These judgments carry distinctive ethical responsibilities and leaders who fail to recognize this risk hollowing out both the effectiveness and the moral integrity of their organizations.

Why management cannot be reduced to an algorithm

Many managerial decisions are not optimization problems; they are judgment problems. The view that management is a mechanical activity – one that can be performed by algorithmic systems without meaningful loss – rests on an assumption that is false. This is the assumption that the optimal execution of strategy is a fully determined process that can, in principle, be specified completely in advance. There are at least two reasons why this assumption fails, one practical and one ethical.

The practical reason is that many management decisions involve weighing considerations that cannot be measured precisely or objectively and that cannot be compared to one another using some universal scoring system. In these situations, humans use their faculty of judgment. When a manager decides how to deliver critical feedback to an employee, they are balancing a highly complex web of interrelated considerations: the employee’s need to hear the truth, their emotional state that day, the relationship the manager hopes to maintain, the signal the conversation sends to the rest of the team, and a whole host of other factors. There is no formula for making such a decision. It is not a measurement problem that better data could solve; it is intrinsic to the situation that such factors must be weighed through judgment, not calculation.

The ethical reason stems from a basic fact: management decisions frequently have human consequences. When a customer service policy affects whether a vulnerable person receives help, when a staffing decision determines whether someone keeps their job, or when an algorithmic recommendation is applied to a real human case – these are not merely operational matters. They affect the lives and welfare of real human beings.

As such, these decisions fall into the realm of ethics – and ethical choices cannot and should not be outsourced to algorithms. They must remain the privilege and the responsibility of human beings. As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” This principle extends to any decision that affects human lives.

This point applies even to those who believe, as Milton Friedman does, that a corporation should be bound only by legal constraints and some minimal, poorly defined notion of ‘ethical custom.’ Ultimately, it doesn’t matter whether you think your company has moral obligations that extend beyond the most basic – what matters is that your customers, employees, and regulators do, and they will act accordingly if you fail to meet them. The algorithm cannot anticipate every situation in which applying its logic will provoke public outrage, regulatory scrutiny, or legal liability. Human judgment is required not only to do the right thing, but also to recognize when doing the wrong thing will be costly – and those two considerations are not always distinguishable in practice.

The fact that middle management has an unavoidably ethical dimension means that it is crucial to understand the ethical obligations that come with the role.

The middle manager as ethical champion

Some of the ethical obligations that middle managers have apply to them because they apply to all humans: don’t lie, don’t steal, don’t harm others unnecessarily. These are clearly important, but they are also relatively straightforward to specify. We will simply note that middle managers have these duties, and then turn to the more interesting question of the special duties middle managers have by virtue of the position they occupy.

What are special duties, and how do they differ from the general duties of all human beings? Consider the example of a judge. While in the courtroom, she has a duty to be impartial; once she is at home, her role as a mother demands partiality and preference for the interests of her children. The special duties flow from the position itself: the role creates the duty.

Just like judges or parents, middle managers have certain special duties that arise from their specific roles. We can think of those duties in terms of three categories:

- Ownership of implementation. When a middle manager applies a policy or acts on a system recommendation, they must own that choice. They cannot hide behind “I was following strategy” or “The algorithm told me to do it.” Cigna medical directors cannot escape moral responsibility by pointing to the system that generated the denials. The decision to click “approve” on a batch of 50 cases without review was their decision, and the consequences were consequences they helped bring about.

- The duty of judgment. Not every decision requires agonizing deliberation. Often the job is to keep the machinery running by faithfully executing the strategy that the organization’s leaders have settled on. The ethical skill lies in knowing when to execute and when to stop – recognizing when something is a routine case the policy was designed for and when it is an edge case the policy’s authors never anticipated.

- Serving as the organization’s conscience. Middle managers are positioned to see what strategy and systems do to people – employees, customers, and communities. They are direct witnesses to the gap between intention and impact. The middle manager who sees a problem and speaks up makes it possible for the organization to respond. The middle manager who stays silent – whether from fear, convenience, or failure to recognize the significance of what they are witnessing – allows harm to continue. They become complicit in what the organization is permitted to ignore.

The thread that connects the special duties of middle management may be summed up in one word: agency. Much more than the traditional view realizes, and perhaps much more than even many middle managers realize, effective and ethical middle management requires independent thought, judgment, and action – even when the easier option would be to do nothing.

Leaders as moral architects

Leaders must recognize that middle managers are not mechanical executors of strategy. They are co-creators of it. Middle management is the layer where abstract principles become concrete action, and where the organization’s conscience resides. If leaders design systems that treat managers as button-pushers – optimizing for speed and mechanical obedience to rules above all else – they will hollow out both the effectiveness and the ethical integrity of the organization. Instead, leaders must design systems that support and enhance the agency of middle management.

Concretely, this means:

- Evaluating judgment, not just efficiency. Assess middle managers on the quality of their decisions and their willingness to surface problems, not just on throughput and compliance metrics.

- Granting real authority to override. Give middle managers genuine power to question and override algorithmic recommendations and make it clear that exercising this authority appropriately is something that will be valued, not punished.

- Protecting time for deliberation. Judgment requires time. Systems that eliminate time eliminate judgment. Build space for middle managers to pause and think, rather than designing workflows that reward only speed.

- Designing systems that support judgment rather than bypassing it. When implementing algorithmic tools, ask whether the system preserves the space for human judgment or engineers that space away. If middle managers are only there to click “approve,” they are not really in the loop.

When leaders fail to treat middle management in this way, change efforts are highly unlikely to succeed. Organizational change requires middle managers to do more than implement new processes – it requires them to interpret the changes needed and apply them in a thousand particular situations, to adapt the new processes to local realities as they emerge minute by minute, and to bring their teams along with them. Middle managers who are treated as cogs lose the adaptive capacity that change requires.

The pressure to automate the middle layer will only intensify as AI systems grow more capable – which makes it all the more important that leaders understand what is at stake.

Retaining judgment and ethics

AI will transform middle management – and some of that transformation is overdue. There is no reason to preserve human involvement in tasks that are genuinely mechanical, and no virtue in retaining inefficiency in the name of job preservation. But leaders must make hard-nosed distinctions between the parts of the middle management function that can be handed to algorithms and the parts that cannot. As machines take over the routine – the scheduling, the processing, the pattern-matching – what remains is precisely what matters most: the judgment that completes strategy, the contextual sensitivity that reads shifting situations, the conscience that asks whether what can be done should be done.

Every organization deploying AI must ask: are we building systems that preserve the space for judgment, or are we engineering it out? Are our middle managers genuinely in the loop, or are they simply there to absorb blame when something goes wrong? The organizations that answer these questions honestly – and design accordingly – will be the ones that harness AI’s power without losing their moral compass. The ones that don’t will learn, as Cigna did, that humans in the loop are only as good as the loop allows them to be.

Original article @ IMD.

The post The looming AI risk: Automating middle management destroys critical ethical layer appeared first on Faisal Hoque.

Published on April 01, 2026 08:48