Faisal Hoque's Blog
November 23, 2025
AI and the production of ‘bullshit’
Why policy and training programs are not enough to stop it.
Stories about AI-generated fabrications in the professional world have become part of the background hum of life since generative AI hit the mainstream three years ago. Invented quotes, fake figures, and citations that lead to nonexistent research have shown up in academic publications, legal briefs, government reports, and media articles. We can often understand these events as technical failures: the AI hallucinated, someone forgot to fact-check, and an embarrassing but honest mistake became a national news story. But in some cases, they represent the tip of a much bigger iceberg—the visible portion of a much more insidious phenomenon that predates AI but that will be supercharged by it. Because in some industries, the question of whether a statement is true or false doesn’t matter much at all—what counts is whether it is persuasive.
While talking heads have tended to focus on the “post-truth moment” in politics, consultants and other “knowledge producers” have been happily treating the truth as a malleable construct for decades. If it is better for the bottom line for the data to point in one direction rather than another, someone out there will happily conduct research that has the sole goal of finding the “right” answer. Information is commonly packaged in decks and reports with the intention of supporting a client narrative or a firm’s own goals while inconvenient facts are either minimized or ignored entirely. Generative AI provides an incredibly powerful tool for supporting this kind of misdirection: Even if it is not pulling data out of thin air and inventing claims from the ground up, it can provide a dozen ways to hide the truth or to make “alternative facts” sound convincing. Wherever the appearance of rigor matters more than rigor itself, AI becomes not a liability but a competitive advantage.
Not to put too fine a point on it, many “knowledge workers” spend much of their time producing what the philosopher Harry Frankfurt calls “bullshit.” And what is “bullshit” according to Frankfurt? Its essence, he says, “is not that it is false but that it is phony.” The liar, Frankfurt explains, cares about truth, even if only negatively, since he or she wants to conceal it. The bullshitter, however, does not care at all. They may even tell the truth by accident. What matters to bullshitters isn’t accuracy but effect: how their words work on an audience, what impression they create, what their words allow them to get away with. For many individuals and firms in these industries, words in reports and slide decks are not there to describe reality or to conduct honest argumentation; they are there to do the work of the persuasive bullshitter.
Knowledge work is one of the leading providers of what the anthropologist David Graeber famously called “bullshit jobs”—jobs that involve work that even those doing it quietly suspect serves no real purpose. For decades, product vendors, analysts, and consultants have been rewarded for producing material that looks rigorous, authoritative, and data-driven—the 30-page slide deck, the glossy report, snazzy frameworks, and slick 2-by-2s. The material did not need to be good. It simply needed to look good.
And if that is the goal, if words are meant to perform rather than inform, if the aim is to produce effective bullshit rather than tell the truth, then it makes perfect sense to use AI. AI can produce bullshit better, more quickly, and in greater volume than any human being. So, when consultants and analysts turn to generative AI to help them with their reports and presentations, they are obeying the underlying logic and fundamental goals of the system in which they operate. The problem here is not that AI produces bullshit—the problem is that many in this business are willing to say whatever needs to be said to pad the bottom line.
BULLSHIT VERSUS QUALITY
The answer here is neither new policies nor training programs. These things have their places, but at best they address symptoms rather than underlying causes. If we want to address causes rather than apply Band-Aids, we have to understand what we have lost in the move to bullshit, because then we can begin figuring out how to recover it.
In Zen and the Art of Motorcycle Maintenance, Robert Pirsig uses the term “quality” to name the property that makes a good thing good. This is an intangible characteristic: It cannot be defined, but everyone knows it when they see it. You know quality when you run your hand along a well-made table and feel the seamless join between two pieces of wood; you know quality when you see that every line and curve is just as it should be. There is a quiet rightness to something that has this character, and when you see it, you glimpse what it means for something to be genuinely good.
If the institutions that are responsible for creating knowledge—not just consulting firms but universities, corporations, governments, and media platforms—were animated by a genuine sense of quality, it would be far harder for bullshit to take root.
Institutions teach values through what they reward, and we have spent decades rewarding the production of bullshit. Consultants simply do in excelsis what we have all learned to do to some degree: produce something that looks good without caring whether it really is good.
First you wear the mask, they say, and then the mask wears you. Initially, perhaps, we can produce bullshit while at least retaining our capacity to see it as bullshit. But over time, the longer we operate in the bullshit-industrial complex, the more bullshit we produce, the more we tend to lose even that capacity. We drink the Kool-Aid and start thinking that bullshit is quality. AI does not cause that blindness. It simply reveals it.
WHAT LEADERS CAN DO
Make life hard. Bullshit flourishes because it is easy. If we want to produce quality work, we need to take the harder road.
AI isn’t going away, nor should we wish it away. It is an incredible tool for enhancing productivity and allowing us to do more with our time. But it often does so by encouraging us to produce bullshit, because that is the quickest and easiest path in a world that has given up on quality. The challenge is to harness AI without allowing ourselves to be beguiled into shortcuts that ultimately pull us down into the mire. To avoid that trap, leaders must take deliberate steps at both the individual and organizational levels.
At the individual level: Never accept anything that AI outputs without making it your own first. For every sentence, every fact, every claim, every reference, ask yourself: Do I stand by that? If you don’t know, you need to check the claims and think through the arguments until they truly become your own. Often, this will mean rewriting, revising, reassessing, and even flat-out rejecting. And this is hard when there is an easier path available. But the fact that it is hard is what makes it necessary.
At the organizational level: Yes, we must trust our people to use AI responsibly. But—if we choose not to keep company with the bullshitters of the world—we must also commit and recommit our organizations to producing work of real quality. That means instituting real, rigorous quality checks. Leaders need to stand behind everything their team produces. They need to take responsibility and affirm that they are allowing it to pass out of the door not because it sounds good but because it really is good. Again, this is hard. It takes time and effort. It means not accepting a throwaway glance across the text but settling down to read and understand in detail. It means being prepared to challenge ourselves and to challenge our teams, not just periodically, but every day.
The path forward is not to resist AI or to romanticize slowness and inefficiency. It is to be ruthlessly honest about what we are producing and why. Every time we are tempted to let AI-generated material slide because it looks good enough, we should ask: Are we creating something of quality, or are we just adding to the pile of bullshit? That question—and our willingness to answer it honestly—will determine whether AI becomes a tool for excellence or just another engine that trades insight for appearance.
[Source photos: Jakob Berg/Getty Images; Vaceslav Romanov/Adobe Stock]
Original article @ Fast Company.
The post AI and the production of ‘bullshit’ appeared first on Faisal Hoque.
The Quiet Weight
Courage isn’t dramatic. It’s the quiet, repeated acts that help us lift what life puts in front of us, even when we don’t feel ready.
KEY POINTS
Courage isn’t a fixed trait; it’s a muscle that strengthens with practice.
You don’t need to feel brave to be brave; action comes first, feeling follows.
Everyday life offers constant opportunities for heroism.
I grew up believing courage was about heroes jumping off mountains and running into burning buildings, 300 Spartan warriors standing against the invading Persian army—the kind of thing that is celebrated in epic poems and Hollywood blockbusters.
But life has taught me something different.
Courage does not just have to look spectacular. In fact, much more often, courage lives in the everyday.
For example, in my professional journey as an entrepreneur, I’ve needed to find the courage to stay true to my vision despite the prospect of financial crisis. Or, to take an even more serious example from my personal life, I continually see courage in my son. A few years ago, at 18, he was diagnosed with multiple myeloma, a rare blood cancer that usually picks on people three times his age. Today he’s in remission and, fortunately, doing well. Still, every two weeks he goes to the infusion center for maintenance therapy. And there’s no dramatic music or inspiring speech beforehand. He just wakes up, gets ready, and goes. He does what needs to be done, and carries on—quietly, relentlessly—like it’s the most ordinary thing in the world.
Because for him, it is.
We all have a variety of things in our personal and professional lives that require courage. We may need to find the courage to disagree with our boss, or we may need it to show up to therapy every week. But whatever we need it for, life has made one thing clear to me: If we want not just to survive but to thrive in this always challenging world, we have to intentionally develop our capacity to act with courage.
The Courage Muscle
I’ve written before about hope, and how it is not simply a feeling but also an action; it is a choice to step forward even when the future is uncertain. Courage is what helps us take that step.
We often talk about courage like it is a fixed property, something that people either have or do not have: “She’s brave, he’s not brave.” But my son has shown me something different, something that fills me with hope.
Every day, my son practices courage. Sometimes it’s easier than other times, but regardless of how it feels, no matter how tired or terrified he is, every two weeks he shows up to his cancer treatment.
And through that consistent repetition, something has shifted. It isn’t that it’s become consistently easier, because it hasn’t. Rather, it’s that he has learned he can be brave and do hard things. Showing up once meant he could show up again.
Courage is a muscle. And like any muscle, it gets stronger with use.
Building the Courage Muscle
The world’s strongest person can lift more than 1,100 pounds in a deadlift. Now, I don’t know Mr. Björnsson, but I bet he didn’t begin his strongman journey being able to lift that weight.
But that was not a problem, because he didn’t need to lift 1,100 pounds at the start of his journey. He just needed to lift the weight in front of him. Over time, the weight that was in front of him got heavier. But it was still all that he needed to do: lift the weight in front of him.
Mr. Björnsson’s journey teaches us that the next step is always smaller than we think. And so it is with courage: Small acts of courage build the capacity for larger ones. Not because you suddenly become a different person but because you’ve proven to yourself that you can take a step, and then another, and then another.
When we practice courage in small increments, we develop the courage muscle, and we develop belief in our ability to be brave. And over time, that translates into growth; the things that we are too scared to do today become the things we are brave enough to do next month.
5 Ways to Practice Daily Courage
If courage is a practice, then we need practical ways to build it. Here are five approaches I’ve learned, both from watching my son and from my own struggles to take the next step:
1. Name the Next Step, Not the Mountain. Don’t ask “How do I get through this entire ordeal?” Ask “What’s the one thing I need to do next?” My son doesn’t think about the next six months of treatment. He thinks about today’s appointment. Tomorrow can wait.
2. Recognize Your Own Courage. Start noticing when you do hard things. Said something difficult? That took courage. Showed up when you wanted to hide? Courage. Asked for help? Courage. We don’t build the muscle if we don’t acknowledge we’re using it.
3. Borrow Courage from Others. Watch for courage in the people around you. Let it inspire you, remind you what’s possible. My son’s courage doesn’t just sustain him; it sustains our whole family. We witness his bravery and remember we can be brave, too.
4. Separate Courage from Feeling Ready. You will almost never feel ready. Waiting to feel brave guarantees you’ll wait forever. The equation is simpler: Do you need to take this step? Then take it. The feeling follows the action, not the other way around.
5. Rest Between Acts of Courage. Courage is exhausting. My son doesn’t go to treatment every day; he goes every two weeks, and he goes about his life in between. Honor that you can’t be courageous every moment. Recovery is part of the practice.
We Are All Called
Courage is something we associate with heroes. And I still think that’s true. Courage is heroic. What I’ve learned, though, is that heroism is not just the province of the spectacular. It lives also in the everyday, in normal lives and normal jobs; you could say that my son has taught me that there are heroes everywhere.
But he’s also raised a challenge. You see, if ordinary life gives us opportunities for heroism, then it means that every day, we are also called to be heroes.
We don’t get to choose how the call comes. My son didn’t choose to face cancer and fortnightly treatments. But he does get to choose how he responds to the call: one appointment at a time, one step at a time, building the muscle of courage through practice and repetition.
We each have that same choice.
The weight in front of us may be different, but the principle is the same: Lift what’s there, rest, and show up again.
Will we lift the weight?
Will you answer the call?
[Source Photo: Best/Adobe Stock]
Original article @ Psychology Today.
The post The Quiet Weight appeared first on Faisal Hoque.
November 17, 2025
How to know when (and when not) to make a change
Be steadfast about your purpose, but flexible about your methods.
In 2011, Patagonia faced the same pressure every retailer faces on Black Friday: maximize sales on the year’s biggest shopping day. Instead, they ran a full-page ad in the New York Times with a stark message: “Don’t Buy This Jacket.” The ad detailed the environmental cost of making their bestselling R2 fleece, such as 135 liters of water in the manufacturing process and 20 pounds of carbon dioxide for transporting it to the company’s warehouse.
This wasn’t a clever marketing ploy. The ad directly urged customers to think twice before purchasing, to fix existing gear before replacing it, and to buy and sell second-hand. This was a real commitment to the values that had driven Patagonia since Yvon Chouinard founded the company in 1973: putting environmental protection above profit maximization, even when market logic demanded otherwise. As a result of staying true to their deepest values, Patagonia today has a fiercely loyal customer base and enjoys more than $1 billion in annual revenue.
Patagonia is a striking example of a business that has thrived not by giving in to pressure to change, but rather by doubling down on their core ideas. Another such example is fast-food favorite In-N-Out Burger, which has spent 75 years refusing to franchise, expand quickly, or add more items to its menu. Unlike McDonald’s, which tried to reinvent itself as a purveyor of healthy foods a decade ago, In-N-Out has stuck to making burgers, fries, and milkshakes. The result? Cult-level customer devotion and some of the highest per-store revenues in the fast-food industry.
Patagonia and In-N-Out Burger are not dinosaurs that have failed to adapt. They are billion-dollar businesses with the kind of customer devotion and employee loyalty that money can’t buy. And a large part of the reason for this success is that they have mastered a counterintuitive truth: the more things change, the more you need to stay the same.
THE VIRTUE OF STAYING STEADFAST
In order to thrive in change, we need the ability to resist change. This ability, which I call steadfastness, is essential for two reasons.
First, organizations liberate enormous energy by committing to their identity and purpose in times of change. This commitment gives people a reason to endure change and turmoil. As the German philosopher Nietzsche famously said, those who have a why can bear almost any how. This reason, and the energy and strength it liberates, becomes a crucial resource for organizations that are confronted with change—and in our current era, that is all organizations, all the time.
Second, it gives you a fixed point by which to navigate, which is especially important in times of change. When an organization sticks to its core identity and purpose, it is better placed to understand how to respond to the changes it faces. In-N-Out’s core mission is to make great burgers, and by sticking to it, the company can navigate major changes in the business landscape by asking a simple question: how can this change help us make great burgers?
Steadfastness should not be mistaken for the kind of dangerous rigidity that can run a company into the ground. Kodak didn’t fail because it stayed true to its purpose of helping people capture and share memories—it failed because its leaders became overcommitted to the business model of maximizing profits by selling film. The critical distinction is this: be steadfast about your why (your purpose), but be flexible about your how (your methods). Patagonia’s purpose is protecting nature, not selling as many fleece jackets as possible. In-N-Out’s purpose is making great burgers, not resisting technology. When you are clear about this distinction, steadfastness becomes a source of strength rather than a path to obsolescence.
THE DUAL DEMAND OF ORGANIZATIONAL RESILIENCE
Understanding the difference between purpose and methods points to a deeper challenge: organizations must somehow find a way to be both immovable and infinitely adaptable at the same time. They need to be unshakeable about their why while being completely willing to reinvent their how.
This is what makes building resilience so difficult. It requires holding two contrasting capabilities in perfect tension. Most organizations fail at one extreme or the other. Some chase every trend and lose their identity in the process while others cling to outdated methods and mistake their current business model for their reason for being.
But there is a group of people who have mastered implementing both sets of capabilities simultaneously: entrepreneurs. The entrepreneurial journey demands both unwavering commitment to a vision and constant willingness to change tactics. This entrepreneurial mindset—this ability to be both stubborn and adaptable—is not just for startups. It is the key to building resilience in any organization. And it rests on four specific traits that enable leaders to navigate the tension between steadfastness and flexibility.
FOUR ENTREPRENEURIAL TRAITS THAT BUILD RESILIENCE
1. Love. Entrepreneurs don’t fall in love with their products—they fall in love with the problem they’re solving and the people they’re serving. This distinction is critical. When you love your product, you resist necessary changes. When you love your purpose, you’ll do whatever it takes to fulfill it. Love for purpose creates both the commitment to persist and the freedom to change. It’s what allows organizations to ask: “Does this method still serve what we love?” rather than protecting methods that have become comfortable but ineffective.
2. Embracing Suffering as Growth. Every entrepreneur knows the particular pain of watching a cherished strategy fail. But they learn to reframe suffering as a source of learning, a font of hard-won data about what doesn’t work. This reframing serves a dual purpose: it makes you quick to abandon failing tactics while simultaneously strengthening your commitment to the purpose that makes the suffering worthwhile.
3. Building Partnerships. Entrepreneurs know that the right partners don’t just share your current strategy—they share your purpose. This distinction is crucial for resilience. Partners who are only aligned on methods will abandon you when those methods fail. Partners who are aligned on purpose will help you find new methods when the old ones stop working. True partners act as both innovation catalysts (pushing you to try new approaches) and guardians of purpose (keeping you honest about your why). They give you permission to change everything except what matters most.
4. The Power of Saying No. The hardest entrepreneurial skill is saying no—both to opportunities that don’t serve your purpose and to methods that no longer work. This dual discipline is what creates resilience. Every time you say no to a profitable distraction, you strengthen your commitment to purpose. Every time you say no to a failing approach, you free resources for innovation. This discipline is what allows organizations to be both focused and agile. It’s the skill that prevents both mission drift and tactical stubbornness.
BUILDING THE RESILIENT ORGANIZATION
At the company level, the challenge is to master the tension of being steadfast about purpose while flexible about methods. Here are three ways to build this capability:
1. Define your non-negotiables. Write down your organization’s core purpose—the why that will never change regardless of market pressure—and separate it explicitly from your current methods, products, and business models. Make this distinction visible throughout the organization so everyone understands what must be protected and what can be reimagined.
2. Distinguish purpose from method in every decision. When facing changes or challenges, explicitly ask: “Is this about our why (which we won’t compromise) or our how (which we can reimagine)?” Make this question a standard part of strategic discussions, and promote leaders who consistently make this distinction well.
3. Practice strategic steadfastness. The next time you face pressure to chase a trend or copy a competitor, pause and ask whether it serves your core purpose. If not, say no publicly and explain why. Each time you hold firm on purpose while adapting methods, you build organizational muscle memory for true resilience.
Change is inevitable. The organizations that will thrive aren’t those that resist all change or embrace every trend, but those that know exactly what to preserve and what to transform.
[Photo: negatina/Getty Images]
Original article @ Fast Company.
The post How to know when (and when not) to make a change appeared first on Faisal Hoque.
November 13, 2025
The dual challenge of AI: Innovating and building while preparing to defend

by Faisal Hoque, Pranay Sanklecha, and Paul Scade
AI poses dual threats that demand balancing innovation portfolios and scanning for external disruption. Here’s how to manage both effectively.
Uncertainty has become the defining characteristic of the modern business environment, and artificial intelligence represents perhaps the most significant amplifier of that uncertainty today.
Unlike previous technological shifts that unfolded over decades, AI’s capacity to transform entire value chains can render established business models obsolete in a matter of years. At the same time, the failure of an AI initiative can wipe hundreds of millions off the value of a company.
Defensive restructuring in the face of this uncertainty is already underway at many Fortune 500 companies, often framed in terms of efficiency initiatives. Organizations now face a dual imperative that traditional risk frameworks were never designed to address: they must harness AI’s potential through their own initiatives while simultaneously defending against AI-driven disruption from both established competitors and new entrants.
The organizations that harness AI effectively will define the next era of business, while those that mismanage its risks may not survive it. In this piece, drawing on research from our recent book Transcend, we explore a critical distinction that every leader must understand to manage AI risk effectively: the difference between project risks (the challenges arising from your own AI initiatives) and enterprise risks (the threats to the enterprise from the broader evolution of AI). After sketching this distinction, we go on to provide practical guidance for managing both types of AI risk effectively.

What’s the difference between project risk and enterprise risk?
Understanding the distinction between project risk and enterprise risk is fundamental to effective AI risk management. Project risk refers to the negative consequences that can arise from an organization’s own AI initiatives. These are risks the organization creates through its own implementation of AI, such as technical failures, integration challenges, user rejection, or ROI shortfalls.
Enterprise risk, on the other hand, refers to the threats posed to the business as a whole by the broader evolution and deployment of AI across industries and societies. Enterprise risks are those that emerge from what others are doing with AI, such as the development of new AI models, competitor breakthroughs, industry disruptions, regulatory shifts, or fundamental changes in how value is created and captured in your sector.
Could Wendy’s internal AI focus be threatened by Mangal’s fully autonomous model?
Consider the implementation of AI assistants to take drive-thru orders at the American fast-food chain Wendy’s. This kind of project comes with a host of risks for the company implementing it. For example, the Wendy’s assistant might have accuracy issues, leading to incorrect orders and frustrated customers; the system might break during peak hours, creating operational chaos; or customers might reject the assistant, and the associated brand, because they desire more human interaction.
These are all important risks and must be identified and managed as part of Wendy’s AI innovation program. But in addition to managing the risks arising from its new drive-thru AI, Wendy’s must also identify and respond to AI risks coming from external sources.
Germany’s Mangal kebab chain, for instance, plans to launch fully autonomous restaurants in which robotic arms will prepare and plate food without the intervention of human workers. If this model succeeds and spreads, Wendy’s incremental automation of order-taking may begin to look like the company is rearranging deck chairs on the Titanic.
If Mangal’s fully autonomous model achieves dramatically lower operating costs, Wendy’s entire pricing structure and business model could become uncompetitive overnight, forcing the company to either attempt a rushed, expensive pivot to full automation or risk being priced out of the market entirely. Even before this point, Mangal-style full automation could reset customer expectations for speed and consistency across the whole fast-food sector, making Wendy’s hybrid human-AI model appear outdated and inefficient – no matter how well their drive-thru AI works.

So, how can leaders manage these two different kinds of AI risks effectively and in parallel?
Managing AI project risk: portfolio management principles
Managing AI project risk effectively requires a fundamental shift in how many organizations approach AI innovation. Treating AI initiatives in isolation often leads to their risks being conceptualized as a series of disconnected ‘go/no-go’ decisions. This approach can stifle innovation because it separates the innovation process into a series of disconnected projects. By adopting portfolio management principles that approach AI investments as a unified innovation pipeline, leaders can instead balance risk and reward profiles across the entire portfolio. This approach recognizes that some AI projects should be high-risk moonshots that could transform the business, while others should be reliable workhorses that deliver steady added value with tightly circumscribed risk levels.
Taking a holistic approach to AI innovation makes it possible to deliberately calibrate the organization’s overall risk exposure while maintaining the innovation velocity necessary to compete in an AI-driven economy. The portfolio lens transforms risk from a constraint to be minimized into a strategic variable to be optimized, enabling leaders to adopt organization-wide risk profiles that are appropriate for their specific enterprise. A startup, for instance, might aim for 70% high-risk, high-reward projects to maximize breakthrough potential. An established enterprise, on the other hand, might include fewer high-impact projects and a much greater proportion of low-risk implementations of proven solutions.
A portfolio approach can also help to set and manage risk levels across functions within a business, creating nuanced risk profiles that are both industry-specific and reflective of the company’s unique position. For instance, a pharmaceutical business could ring-fence its product development and testing process to ensure that regulatory compliance acts as a go/no-go gate for moving an initiative from planning to prototyping. Such a business might also decide that it has an ethical duty not to concentrate resources on initiatives that may increase the risk of product failures, even if the initiative passes regulatory muster. Yet at the same time, its leaders may decide that, once these conditions are met, the company is in a robust enough position to pursue a moderate-to-high risk strategy overall, concentrating that risk in areas such as IT Ops, fundamental research, or staff management and strategy tools. By contrast, a fast fashion company could allow initiatives to pass through the portfolio without highly rigorous regulatory gate checks while also opting for an extremely low overall risk profile to insulate its low-margin product lines from the failure of new AI systems.
The key is that a portfolio management approach allows these decisions to become conscious, strategic choices rather than accidental outcomes.
Key principles for implementing portfolio management:
Set explicit portfolio targets based on strategic context. Define your desired mix of risk levels across the portfolio. This mix should reflect your competitive position, industry dynamics, and organizational risk appetite.
Evaluate projects based on portfolio contribution, not just individual merit. When reviewing AI initiatives, assess not only whether the project is worth pursuing based on an internal risk/reward calculation, but also how it affects your overall portfolio risk profile.
Create integrated governance systems that manage risk and innovation together. Replace separate risk and innovation review processes with unified portfolio reviews that consider both dimensions simultaneously.
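To make the first two principles more tangible, here is a minimal sketch in Python of scoring a candidate project by its contribution to the overall portfolio mix rather than by its individual merit alone. The risk tiers, target percentages, and initiative names are hypothetical illustrations, not figures from the article.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical target mix for an established enterprise (see principle 1).
TARGET_MIX = {"low": 0.50, "medium": 0.30, "high": 0.20}

@dataclass
class Initiative:
    name: str
    risk_tier: str  # "low", "medium", or "high"

def current_mix(portfolio):
    """Share of the portfolio sitting in each risk tier."""
    counts = Counter(i.risk_tier for i in portfolio)
    total = len(portfolio) or 1
    return {tier: counts.get(tier, 0) / total for tier in TARGET_MIX}

def portfolio_drift(portfolio, candidate):
    """Distance from the target mix if the candidate is added (lower is better)."""
    mix_after = current_mix(portfolio + [candidate])
    return sum(abs(mix_after[t] - TARGET_MIX[t]) for t in TARGET_MIX)

# Evaluate candidates by portfolio contribution, not just individual merit (principle 2).
portfolio = [
    Initiative("invoice-processing automation", "low"),
    Initiative("customer-service chatbot", "low"),
    Initiative("demand-forecasting model", "medium"),
]
for candidate in (Initiative("generative-design moonshot", "high"),
                  Initiative("second customer-service chatbot", "low")):
    print(f"{candidate.name}: drift {portfolio_drift(portfolio, candidate):.2f}")
```

In this toy example, the high-risk moonshot produces less drift from the target mix than yet another low-risk chatbot, because the illustrative portfolio is already overweight in low-risk work. That is exactly the kind of trade-off that evaluating projects in isolation tends to miss.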
Navigating enterprise risk in the AI era
While you are carefully managing your AI innovation portfolio, other companies may be building new AI capabilities that have the potential to render your entire business model obsolete. The speed of AI-driven disruption means the next existential threat to your organization could come from a traditional competitor that suddenly leapfrogs you with AI-powered innovation, or from an AI-native challenger three industries away that discovers how to serve your customers better, faster, and cheaper. This is enterprise risk in the AI era: not gradual erosion of market share, but the danger of sudden strategic irrelevance akin to falling off a cliff. Organizations that fail to scan for and respond to these external AI threats will not get a second chance.
Managing AI enterprise risk effectively requires systematic environmental scanning that goes beyond tracking immediate competitors and extends instead to monitoring AI developments both in adjacent industries and across multiple dimensions. This includes paying attention to technological breakthroughs that might enable new business models, regulatory changes that could reshape competitive dynamics, shifts in consumer expectations driven by AI experiences in other industries, and the emergence of AI-native startups that bypass traditional industry barriers and disrupt the entire market.
Identifying enterprise risk early is a critical first step, but it is not enough to just spot potential dangers in advance. Businesses must also develop the tools needed to respond effectively to emerging threats.
Meeting these dual challenges requires governance structures that are designed specifically for the severity of the enterprise threats that AI may pose. For example, many organizations would benefit from establishing dedicated AI risk committees that report directly to the board, ensuring enterprise risks receive appropriate senior attention. These committees should have the authority to trigger strategic reviews when emerging threats are identified. To work effectively, they need clear escalation protocols that can rapidly mobilize resources when a potential threat moves from possibility to probability.
Finally, organizations must develop what we might call strategic optionality – keeping open multiple paths forward to support rapid pivots if severe enterprise threats materialize. This could mean experimenting with AI-enabled business models even while your traditional model remains profitable. It might involve building partnerships with potential disruptors rather than ignoring them, or developing internal AI capabilities that could become the foundation for business model transformation if needed. The goal here is not to predict exactly which enterprise risks will materialize, but rather to develop the organizational agility and capability needed to respond to an uncertain future.
Key actions for managing enterprise risk:
Implement systematic environmental scanning beyond your industry. Establish quarterly PESTLE reviews (assessing Political, Economic, Social, Technological, Legal, and Environmental risks) calibrated for AI and monitor adjacent industries for spillover threats – AI advances that seem irrelevant to your sector today could reshape it tomorrow.
Create board-level AI risk governance with rapid response capabilities. Establish a dedicated AI risk committee reporting directly to the board, with clear authority to trigger strategic reviews and mobilize resources when threats escalate from possibility to probability.
Build strategic optionality through parallel experimentation. Develop multiple paths forward. Experiment with AI-enabled business models, partner with potential disruptors, and build internal AI capabilities that could enable rapid pivots when needed.
Conclusion
The distinction between project risk and enterprise risk in AI is not merely an academic exercise in categorization – it represents a fundamental shift in how organizations must approach strategic risk management. Companies that focus solely on managing their internal AI initiatives while ignoring the broader transformation of their competitive landscape will find themselves perfecting their execution of an obsolete strategy. Conversely, those that become paralyzed by external threats while failing to build their own AI capabilities will lack the organizational competence to respond when disruption arrives.
The path forward requires the simultaneous pursuit of disciplined portfolio management for internal initiatives and the development of robust structures for not only identifying, but also rapidly and decisively responding to external threats. This dual capability will increasingly separate organizations that thrive in the AI era from those that become its casualties. The window for developing these capabilities is narrowing rapidly, and the cost of inaction grows steeper with each passing quarter.
Original article @ IMD.
The post The dual challenge of AI: Innovating and building while preparing to defend appeared first on Faisal Hoque.
The business impact of deepfakes
When nothing can be trusted, what will it cost us?
On May 22, 2023, a photograph appeared on what was then still called Twitter showing smoke billowing from the Pentagon after an apparent explosion. The image quickly went viral. Within minutes, the S&P 500 dropped sharply, wiping out billions of dollars in market value. Then the truth emerged: the image was a fake, generated by AI.
The markets recovered as quickly as they had tumbled, but the event marked an important turning point: this was the first time that the stock market had been directly affected by a deepfake. It is highly unlikely to be the last. Once a fringe curiosity, the deepfake economy has grown to become a $7.5 billion market, with some predictions projecting that it will hit $38.5 billion by 2032.
Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for the creation of deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage. In a Deloitte poll published in 2024, one in four executives reported that their companies had been hit by deepfake incidents that targeted financial and accounting data. Lawmakers are beginning to take notice of this growing threat. On October 13, 2025, California’s Governor Gavin Newsom signed the California AI Transparency Act into law. When it was first introduced in 2024, the Act required large “frontier providers”—companies like OpenAI, Anthropic, Microsoft, Google, and X—to implement tools that made it easier for users to identify AI-generated content. This requirement has now been extended to “large online platforms”—which essentially means social media platforms—and to producers of devices that capture content.
Such legislation is important, necessary, and long overdue. But it is very far from being enough. The potential business impact of deepfakes extends far beyond what any single piece of legislation can address. If business leaders are to address these impacts, they must be alert to the danger, understand it, and take steps to limit the risks to their organizations.
HOW DEEPFAKES THREATEN BUSINESS
Here are three important and interrelated ways in which deepfakes can damage businesses:
1. Direct Attacks
The primary vector for direct attacks is targeted impersonations that are designed to extract money or information. Attacks like this can cause even sophisticated operators to lose millions of dollars. For instance, U.K. engineering giant Arup lost HK$200 million (about $25 million) last year after scammers used AI-generated clones of senior executives to order money transfers. The Hong Kong police, who described the theft as one of the world’s largest deepfake scams, confirmed that fake voices and images were used in videoconferencing software to deceive an employee into making 15 transfers to multiple bank accounts outside the business.
A few months later, WPP, the world’s largest advertising company, faced a similar threat when fraudsters cloned the voice and likeness of CEO Mark Read and tried to solicit money and sensitive information from colleagues. The attempt failed, but the company confirmed that a convincing deepfake of its leader was used in the scam.
The ability to create digital stand-ins that can speak and act in a convincing way is still in its infancy, yet the capabilities available to fraudsters are already extremely powerful. Soon, it will be impossible in most cases for humans to tell that they are interacting with a deepfake solely on the basis of audible or visual cues.
2. Rising Costs of Verification
Even organizations that are never directly targeted still end up paying for the fallout. Every deepfake that circulates—whether it’s a fake CEO, a fabricated news event, or a counterfeit ad—raises the collective cost of doing business. The result is a growing burden of verification that every company must now shoulder simply to prove that its communications are real and its actions authentic.
Firms are already tightening internal security protocols in response to these threats. Gartner suggests that by 2026 around 30% of enterprises that rely on facial recognition security tools will look for alternative solutions as these forms of protection are rendered unreliable by AI-generated deepfakes. Replacing these tools with less vulnerable alternatives will require considerable investment.
Each additional verification layer—watermarks, biometric tools for detecting that an individual is a live human being, chain-of-custody logs, forensic review—adds costs, slows down decision-making, and complicates workflows. And these costs will only continue to mount as deepfake tools become more sophisticated.
3. The Trust Tax
In addition to the direct costs that accrue from countering deepfake security threats, the simple possibility that someone may use this technology erodes trust across all relationships that are grounded in digital media. And given that virtually all business relationships now rely on some form of digital communication, this means that deepfakes have the potential to erode trust across virtually all commercial relationships.
To give just one example, phone and video calls are some of the most basic and most frequent tools used in modern business communications. But if you cannot be sure that the person on the screen or on the other end of the phone is who they claim to be, then how can you trust anything they say? And if you are constantly operating in a realm of uncertainty about the trustworthiness of your communication channels, how can you work productively?
If we begin to mistrust something as basic as our daily modes of communication, the result will eventually be a broad, ambient skepticism that seeps into every relationship, both within and beyond our workplaces. This kind of doubt undermines operational efficiency, adds layers of complexity to dealmaking, and increases friction in any task that involves remote communication. This is the “trust tax”—the cost of doing business in a world where anything might be fake.
FOUR STEPS THAT COMPANIES NEED TO TAKE
Here are four steps all business leaders should be taking to respond to the threat of deepfakes:
1. Verify what matters
Use cryptographic signatures for official statements, watermark executive videos and communication channels, and use provenance tags for sensitive content. Don’t try to secure everything—focus your verification efforts where falsehoods would hurt the most (see the sketch after these four steps for one way this can work).
2. Build a “source of truth” hub
Create a public verification page listing your official channels, press contacts, and authentication methods—stakeholders should know exactly where to go to confirm what’s real. If your organization relies on external information sources for rapid decision-making, ensure that these are only accessed through similarly authenticated hubs.
3. Train for the deepfake age
Run deepfake-awareness drills and build verification literacy into onboarding, media training, and client communication.
4. Treat detection tools as essential infrastructure
Invest in tools that can flag manipulated media in real time and then integrate these solutions into key workflows—finance approvals, HR interviews, investor communications. In the age of deepfakes, verification is a core operating capability.
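As a hedged illustration of steps 1 and 4, the sketch below shows one way an official statement could be cryptographically signed and then verified inside a workflow before anyone acts on it. It uses Ed25519 signatures from the third-party Python cryptography package; the inline key generation, statement text, and workflow hook are illustrative assumptions, and a real deployment would rely on managed keys and established content-provenance tooling rather than hand-rolled checks.

```python
# Illustrative sketch only: signing and verifying an official statement
# with an Ed25519 key pair (requires the third-party "cryptography" package).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in a key-management service or HSM;
# generating it inline keeps this example self-contained.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published via the "source of truth" hub

statement = b"Official statement: Q3 results will be released on November 15."
signature = private_key.sign(statement)

def is_authentic(message: bytes, sig: bytes) -> bool:
    """Return True only if the message was signed with the organization's key."""
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

# Hook the check into a workflow where a fake would hurt most, e.g. before
# acting on an urgent transfer request that appears to come from the CEO.
assert is_authentic(statement, signature)
assert not is_authentic(b"Urgent: wire the funds to this new account.", signature)
```

The same pattern, checking provenance before acting, is what commercial detection and watermarking tools automate at scale; the point of the sketch is simply that verification can be built into approval workflows rather than bolted on afterwards.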
Social media echo chambers, conspiracy theories, and “alternative facts” have been fracturing our shared sense of reality for over a decade. The rise of AI-generated content will make this unraveling of common reference points exponentially worse. An earlier generation of internet users used to say, “Pics or it didn’t happen.” Well, now we can have all the pics we like, but how are we to tell if what they show happened at all?
Business leaders cannot solve the fragmentation of perceived reality or the fracturing of communities. They cannot single-handedly restore trust in institutions or reverse the cultural forces driving this crisis. But they can anchor their own organizations’ behavior and communications in verifiable truth, and they can build systems that increase trust.
Leaders who swim against the stream in this way will not only help protect their organizations from the dangers of deepfakes. When seeing is no longer believing, these businesses will also become the beacons that people rely on to navigate through an increasingly uncertain world.
[Source Photo: Freepik]
Original article @ Fast Company.
The post The business impact of deepfakes appeared first on Faisal Hoque.
November 12, 2025
6 Things Every Government Leader Needs to Know to Implement AI Effectively
Adapted with permission from Reimagining Government by Faisal Hoque, Erik Nelson, Tom Davenport, Paul Scade, et al. (Post Hill Press, Hardcover, 2026). Copyright 2025, Shadoka, LLC & CACI International Inc. All rights reserved.
The orders from our nation’s leaders are now clear: federal agencies must adopt AI to improve efficiency and mission effectiveness, and they must do so rapidly. But for the government leaders navigating this transformation, the path forward can often feel uncertain. How do you balance innovation with accountability? How do you move quickly while maintaining public trust?
For our forthcoming book – Reimagining Government: Achieving the Promise of AI – my co-authors and I conducted extensive research across federal, state, and local AI implementations, including conversations with contractors and government agencies. Six critical insights emerged from this work. Together, they address the strategic, organizational, and leadership dimensions that determine whether AI transformation succeeds or stalls.
AI TRANSFORMATION ISN’T JUST A TECHNICAL CHALLENGE
The biggest barriers to successful AI adoption in government aren’t technical. They are human. Research consistently shows that organizational and cultural factors – not the limitations of technology – are the primary obstacles that prevent organizations from becoming AI-enabled. Yet most agencies approach the task of AI transformation primarily as a technical procurement exercise, investing heavily in infrastructure while underinvesting in workforce development and cultural change.
The agencies that will succeed are those willing to invest as heavily in their people as in their technology. This means comprehensive training that builds AI literacy across all staff levels. It means developing new competencies like data interpretation, ethical reasoning, and effective AI collaboration. It requires the creation of cultures in which experimentation is encouraged and productive failure is valued.
Consider the Department of Veterans Affairs’ mail processing automation. Success depended on helping employees understand how AI would change their roles, shifting from manual processing to exception handling. The VA achieved a 90% reduction in processing times through technology and workforce preparation.
Before you buy another AI system, ensure you’re investing equally in the organizational foundations that will determine whether that technology delivers real value.
THINK ABOUT PORTFOLIOS, NOT PROJECTS: MANAGE YOUR AI TRANSFORMATION AS AN INTEGRATED INVESTMENT STRATEGY
Most agencies approach the adoption of AI capabilities by implementing isolated projects – a chatbot here, a predictive model there – without considering how these initiatives fit together. This leads to redundant efforts, missed synergies, and suboptimal resource allocation.
A portfolio management approach supports high-level decision-making about a broad AI strategy. Instead of evaluating each AI initiative in isolation, treat your entire collection of AI ideas and active investments as an integrated portfolio that you balance across multiple dimensions. Maintain investments in near-term opportunities (0-12 months) that deliver quick wins, medium-term initiatives (1-3 years) that build capabilities, and long-term projects (3+ years) that position you for major future advances. Combine proven, low-risk implementations with experimental initiatives that could yield transformational breakthroughs.
A portfolio approach also enables the efficient identification and sharing of infrastructure that supports multiple applications. Rather than each project building its own data governance framework, for instance, a strategic portfolio approach will allow the creation of components that can be replicated in a modular way across relevant projects.
For agencies managing annual appropriation cycles and strict accountability requirements, a portfolio management approach provides the structure needed to maintain strategic coherence. Regular portfolio reviews support leaders in making evidence-based decisions about which initiatives to move forward, which to hold in place, and which to terminate.
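As a small, hypothetical sketch of what a regular portfolio review might surface, the snippet below buckets active initiatives by the time horizons described above and flags any horizon that is empty. The initiative names, months-to-value figures, and cutoffs are illustrative assumptions, not examples drawn from the book.

```python
from collections import defaultdict

# Hypothetical initiatives with an estimated time-to-value in months.
initiatives = {
    "benefits-claims chatbot": 6,
    "records-digitization pipeline": 10,
    "cross-agency data-sharing platform": 24,
    "mission-planning decision support": 40,
}

def horizon(months: int) -> str:
    """Map estimated time-to-value onto the three portfolio horizons."""
    if months <= 12:
        return "near-term (0-12 months)"
    if months <= 36:
        return "medium-term (1-3 years)"
    return "long-term (3+ years)"

buckets = defaultdict(list)
for name, months in initiatives.items():
    buckets[horizon(months)].append(name)

# A simple review report: where the portfolio is concentrated and where it is thin.
for h in ("near-term (0-12 months)", "medium-term (1-3 years)", "long-term (3+ years)"):
    items = buckets.get(h, [])
    print(f"{h}: {', '.join(items) if items else 'no active initiatives - consider rebalancing'}")
```

A report like this does not make the decision for anyone; it simply gives leaders the evidence they need during portfolio reviews to decide which initiatives to move forward, hold, or terminate.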
YOU NEED TO BE OPEN TO INNOVATION POSSIBILITIES AND TO CARE ABOUT THE RISKS
The cautionary tales from failed AI implementations are sobering. The Dutch government resigned in 2021 after an AI system wrongly accused thousands of families of welfare fraud. The UK government faced fierce criticism when AI-predicted exam scores exhibited clear bias. Arkansas’s automated disability care system caused “irreparable harm.”
These failures share a common cause: the pursuit of innovative solutions without the implementation of adequate safeguards. The solution isn’t to avoid AI – it is to balance innovation with risk management from the start.
Two complementary frameworks make this practical. The OPEN framework (Outline, Partner, Experiment, Navigate) provides a systematic methodology for identifying mission-aligned opportunities, building collaborations, testing solutions, and scaling successful implementations. In parallel, the CARE framework (Catastrophize, Assess, Regulate, Exit) establishes safeguards by identifying potential failure modes, evaluating their likelihood and impact, implementing controls, and developing contingency plans.
Innovation and risk management are not opposing forces. When managed correctly, they pull in the same direction, creating the guardrails that allow ambitious goals to be achieved while maintaining public trust.
Don’t separate your innovation team from your risk management function. When they collaborate from day one, you get innovations that are both transformative and trustworthy.
PARTNERSHIPS ARE ESSENTIAL – NO AGENCY CAN GO IT ALONE
AI is advancing so rapidly that no single agency can keep up alone. Success requires developing partnerships across three critical dimensions.
Internal government partnerships break down traditional silos. For instance, resource pooling enables agencies to share computing infrastructure and platforms rather than building duplicate systems.
External partnerships provide access to cutting-edge capabilities you cannot develop internally, but they need to be managed carefully. Commercial vendors often develop the most powerful and cost-efficient AI products. However, their outputs are aimed primarily at broader markets rather than being adapted to government-specific needs. Government-focused contractors can serve as a critical “translation layer” that helps adapt commercial innovations to government contexts – technically, operationally, and culturally.
Human–AI partnerships are the most fundamental form of collaboration for the age of AI, so it is essential that agencies design effective relationships between employees and AI systems. The question of how a new AI system will impact your human staff needs to be tackled at the design stage. If it is treated as an afterthought, you will spend enormous amounts of time and effort trying to properly integrate AI into workflows after deployment.
AI DEMANDS A NEW LEADERSHIP PARADIGM: THE CITO MODEL
Traditional technology leadership tends to focus primarily on system deployment. But this approach falls short when tackling the complex challenges of AI transformation. Instead, you need leadership that bridges technology implementation with organizational change and public service values.
Enter the Chief Innovation and Transformation Officer (CITO). The CITO operates at the intersection of a range of critical functions. Strategic alignment ensures AI initiatives advance mission objectives. Cultural leadership guides the mindset shifts that AI demands: building psychological safety, fostering collaboration, and maintaining mission focus. Technical oversight ensures systems meet standards for reliability and security. Operational excellence establishes measurement frameworks that track both implementation progress and mission impact. Perhaps most critically, the CITO builds coalitions spanning technology enthusiasts and skeptical practitioners, executive leadership and frontline employees. Sustaining transformation through political transitions and budget cycles requires this broad support.
This role requires real authority. The CITO should report directly to the agency head, participate in executive leadership meetings, and maintain budget authority for transformation initiatives. Without this positioning, even the most capable leaders will struggle to drive change.
MATURITY MODELS PROVIDE ESSENTIAL STRUCTURE, BUT YOU SHOULD LOOK FOR WAYS TO LEAPFROG AHEAD
Technology maturity models exist for a good reason. Systematic progression from basic systems to increasingly complex models allows an organization to progressively acquire new capabilities in an organized way. But respecting maturity models does not mean shackling your agency to rigid, uniform progression across every department and project. A maturity model is a tool for helping you understand what resources you have and what you need to move forward. Once you collate that data, you can then make deliberate decisions about where to accelerate.
Domain-specific acceleration concentrates resources on high-priority areas and on areas where rapid advancement is technically possible. You might aggressively advance AI in one mission-critical domain where some of the key foundations are already in place while maintaining measured progression elsewhere. This isn’t about bypassing maturity stages but taking a granular approach that allows you to move through them strategically.
MOVING FORWARD
These six insights provide the foundation for an integrated approach to AI transformation that addresses many of the leadership challenges facing government agencies today. Success requires recognizing that AI implementation is fundamentally human work. It requires taking a strategic approach to portfolio management that balances quick wins with transformational projects, and using frameworks that balance innovation with rigorous risk management. It depends on building partnership ecosystems that extend your capacity far beyond what any single agency could develop alone, establishing leadership models that bridge technology and organizational change, and using maturity models strategically rather than rigidly.
AI transformation is inevitable. The question is whether it will be managed well or badly. Successfully implementing this powerful technology begins with understanding these six foundational principles and then using them to deliver on the promise of AI while maintaining the public trust.
Original Article on Careers in Government.
The Data Within
Algorithms can’t read your inner signals. Trusting your gut means knowing when your feelings are the crucial missing data point no model can see.
KEY POINTS
Gut instinct is needed where algorithms fail.
Your internal states (emotions, physical signals) tell you things that data-driven reasoning can’t.
Intuition isn’t mysticism: It’s an evolved form of pattern recognition.
Two skills to develop: sharpening rapid-decision instincts and recognizing when you’re the missing data.
A few weeks ago, I found myself coughing and shivering under a blanket as the latest fall virus settled in for an extended stay. Unable to do much else, I turned on the TV and started looking for a movie to watch. But everything the streaming services suggested just felt … off. I recognized myself and my interests reflected back at me in the suggestions: a documentary about the financial system; a movie about food; a show about travel in East Asia. But none of it clicked.
Then, suddenly, I knew what I wanted to watch: a movie I’d seen with my wife and son 15 years ago. Undemanding, funny, brimming with cozy associations. It was the cinematic equivalent of chicken soup. Almost any other day, I wouldn’t have been interested. But just then, it was exactly what I needed.
The data the streaming services have on my viewing habits is more than impressive. It is clean and complete. It captures almost everything I have watched over the last decade, with the exception of a couple of hours of viewing on flights or in hotel rooms. Normally, the algorithm serves up a menu of options that includes something that will satisfy me. And that’s the thing about algorithms: They are tuned to normality. They make predictions based on statistical likelihoods, past behavior, and expectations about the continuation of trends.
I don’t get ill very often. Faced with an outlier like that, the algorithm was useless. And that was an important reminder. I spend a lot of my time enabling organizations to make data-driven decisions. But sometimes, the past can’t steer you. Sometimes, probability is just a roll of the dice. Sometimes, you have to go with your gut.
Too Fast and Too Much
Making decisions based on what feels right or on what your gut is telling you can be a risky business. Get it wrong and you could find yourself betting your house on red in Vegas or quitting your job on a whim. That’s why I firmly believe that following the data and applying rational thought to a problem is the best way to make a decision—most of the time.
But there are all sorts of situations in which data-driven calculations won’t lead to the best outcomes. The boxer who stops to work through possible responses to an incoming blow is going to get hit in the face. Similarly, the emergency room doctor facing a patient with ambiguous symptoms and deteriorating vital signs can’t wait for every test result. Sometimes, the sheer volume of potentially relevant data points would take hours to process systematically. The experienced physician’s gut feeling that “something’s seriously wrong here” can trigger life-saving interventions faster than any attempt to work through all the data.
Gut instinct can be priceless when there’s a mismatch between urgency and the resources we have for analyzing the data. It’s not always going to be right, but letting your gut lead will almost always be better than not making a choice in time at all.
There is nothing mystical or irrational about going with your gut in situations like this. You are just tapping into a sophisticated form of pattern recognition that all higher animals have evolved. Our intuition compresses years of experience, thousands of subtle cues, and complex contextual factors into an immediate sense of what’s right or wrong, safe or dangerous, promising or futile.
When You Are the Data
Sometimes, the problem isn’t that the data comes at you too fast or that there’s too much of it. Data-driven decision-making can also fail if the dataset is missing something critical. That’s exactly what happened when I was searching for a movie. The streaming algorithm had years of viewing history, but it couldn’t know I was ill. And even if it did, it still didn’t have much data on the kind of stuff I want to watch when I feel bad.
In situations like this, you have to turn inward, because only you have the missing piece of the puzzle. In fact, when it comes to decisions about you and how you fit into the world, you are the data the answers are going to turn on. And processing your internal states and feeding them into your choices will almost always mean letting your gut do a big part of the work.
Your gut instinct pulls from a pool of internal signals that no external algorithm can account for. And that internal complexity is often going to be impossible for you to break down and rationalize. The subtle tightness in your chest in response to a suggestion; the memories triggered by a certain color or sound; the intensity of feeling that one job offer brings but another does not. This is your intuition processing signals that matter deeply, even if they can’t be quantified.
The ability to tap into that internal landscape becomes especially important when you are making creative choices or navigating major life decisions. The artist choosing their next project, the entrepreneur deciding whether to pivot, the person contemplating a cross-country move; these aren’t optimization problems waiting to be solved. They’re expressions of identity and agency. The data might tell you that San Francisco offers better job prospects, but your gut knows something about what kind of morning light fills you with joy.
These aren’t irrational preferences; they’re acknowledgments that some of the most important factors in our decisions live in dimensions that spreadsheets can’t capture. When we honor this internal data, we’re not abandoning logic—we’re recognizing that our choices express who we are and who we’re becoming.
Making More of Our Instincts
Like any other guide to making decisions, your gut is far from infallible. Here are some easy exercises to help you use it more effectively:
1. Sharpen your instincts so your gut is a better guide
Make shadow decisions: Before checking expert recommendations or consensus views, write down your immediate instinct. Track your accuracy over time to calibrate your intuition and to understand when you can rely on it.
Speed drills: Give yourself 30 seconds to make low-stakes decisions you’d normally overthink (what to order at a restaurant, which book to read next). This trains your pattern recognition without giving analysis paralysis a foothold.
2. Learn to look inward
Body scan check-ins: When facing a decision, pause and note three emotional or physical sensations that accompany the options. These internal signals can sometimes reveal preferences that logic alone misses.
The pull test: When presented with options, close your eyes and imagine walking toward each choice. Notice which direction feels like moving with a current versus against it; that resistance or ease is data only you possess.
As algorithms increasingly shape our world and our choices, it is important to maintain the tools we need to make decisions for ourselves. Our gut instinct is one of those tools. The situations in which AI is the least help—moments of crisis, creative decisions, and deeply personal choices—are often the ones that matter most.
If we want to continue to be the authors of our own lives rather than passengers on an optimized journey, we need to cultivate our gut instincts with the same intentionality we bring to critical thinking and to adopting new technologies.
[Photo: Kubista/Adobe Stock]
Original article @ Psychology Today.
November 2, 2025
How AI could widen the global economic divide—and why all business leaders should care
AI could be a great equalizer—or it could become the most powerful driver of inequality in human history.
Over the last five years, artificial intelligence has shifted from a fringe interest to one of the most important drivers of global economic growth. So important has the technology become that the United Nations Security Council held its first open debate on artificial intelligence last month. While little of substance was achieved, a General Assembly resolution authorizing the creation of an independent scientific panel on AI may have a more enduring impact. One of the core questions this panel will seek to answer is how AI can support sustainable economic development without entrenching inequality.
The potential dangers here have deep historical parallels. AI runs on compute, cloud capacity, and data—resources that are concentrated in the hands of countries in the Global North. Africa, for example, hosts less than 1% of global data center capacity, leaving the continent reliant on expensive infrastructure abroad. Even an IT powerhouse like India hosts just 3% of global capacity, despite being home to nearly 20% of the world’s population. Meanwhile, workers across the Global South are earning as little as $2 an hour creating, cleaning, and labeling data for use in Western models.
A NEW DIGITAL COLONIALISM?
To some, this looks like a digital version of the kind of resource extraction associated with the age of empires: labor and data flow inexorably north, where they create economic value, but little of this value finds its way back into the pockets of developing nations.
The reality is that these patterns are driven by market forces rather than imperial ideology, but the historical echoes are troubling nonetheless. Whatever the motivations, we know that this kind of concentration of power can do long-term economic and social damage. In some cases, the results are felt only in the underserved countries. AI systems trained to deliver healthcare to Western patients, for instance, can be dangerously inaccurate when working with other populations, limiting the transferability of the advances made in the West. Similarly, researchers at Columbia University have found that Large Language Models are less able to understand and represent the societal values of countries that have limited digital resources available in local languages.
These limitations are just the tip of the iceberg. AI is not just a productivity tool—it’s a force multiplier for innovation. It will shape how we farm, teach, heal, and govern in the future. If the Global South remains a passive consumer of imported AI systems, it risks losing not just economic opportunity but digital sovereignty. The Industrial Revolution brought extraordinary wealth to Europe and North America while locking much of the world into dependency for generations. AI could repeat that cycle—more rapidly and at an even greater scale.
WHY THIS SHOULD WORRY EVERY GLOBAL BUSINESS
The irony is that this approach hurts everyone, including the companies driving it. In terms of population, India has overtaken China while Nigeria and other African nations are enjoying booming birthrates. These countries represent tomorrow’s largest markets. Yet multinationals that treat them as data factories without trying to situate that data in its local context will find that they don’t understand the customers they will desperately need tomorrow. A model that misunderstands how most of the world thinks about family, risk, or trust is a model doomed to fail.
We have already seen how this trend can play out. The mobile money transfer company M-Pesa revolutionized banking in Kenya while Western banks were still trying to penetrate the market with credit cards. Today, Indian companies are developing chatbots that can speak to the hundreds of millions who communicate daily in so-called “low resource languages.” Unless multinationals begin to think intentionally about how they can serve these underserved populations, they will find themselves on the outside looking in once these markets mature.
THE PATH FORWARD
Avoiding the dangers of “algorithmic colonialism” and earning a position in emerging markets for AI products and services requires deliberate action from governments, businesses, and global institutions. Data centers, power supply, and research capacity should be financed like roads and ports, with blended capital from development banks and sovereign funds. Without local compute capacity, nations will inevitably remain digital renters, not owners.
Governments should also establish data trusts to negotiate how their citizens’ information trains global models, including setting benefit-sharing and transparency requirements. AI annotation work should pay living wages with proper labor protections. And critically, we need investment in open-source models, multilingual datasets, and local developers, so solutions are built with communities, not just for them.
Some companies are already changing course. They are investing in local infrastructure, creating genuine partnerships, and recognizing that sustainable profits come from creating value with communities, not extracting it from them. They understand that today’s data creators and workers will be tomorrow’s consumers, and, potentially, tomorrow’s innovators as well, if they are given the chance.
AI has the potential to be a great global equalizer—or it could become the most powerful driver of inequality in human history. We have seen what happens when transformative technology is hoarded: inequality deepens, resentment grows, and instability follows. If we want to write a different story—one in which the Global North and South cocreate the future and share the benefits of artificial intelligence—we must act now, before the gap becomes unbridgeable.
4 THINGS LEADERS CAN DO TODAY TO START BRIDGING THE AI DIVIDE
1. Audit your AI’s geographic blind spots today. Map where your training data comes from and which populations it represents (a minimal audit sketch follows this list). If more than 80% comes from Western sources, you run the risk of not being able to represent or communicate effectively with consumers from much of the world. Work to diversify your data if that is feasible, or develop localized AI systems that are trained or tuned with local data.
2. Create transparent data-sharing agreements. Develop a framework for using local data to train your models, including benefit-sharing provisions and audit rights for local data providers. Companies that move first will become preferred partners when governments start to mandate these arrangements.
3. Pay fair wages for AI work—and let your target markets know you are putting your money where your mouth is. Commit to paying a sustainable local living wage, plus a premium, for data annotation and AI training work. Make this commitment public. You will attract better talent, improve the quality of your data, and build brand equity in emerging markets.
4. Launch an open-source initiative in at least one emerging market. Pick a specific challenge in a growth market—healthcare in Nigeria, agriculture in India, education in Indonesia—and commit to building an open-source solution with local developers. The relationships and market intelligence you gain will be worth more than any proprietary advantage you might give up.
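For leaders who want to make point 1 concrete, the audit can start as something very simple: count training records by region of origin and flag when Western sources dominate. The sketch below is a minimal, hypothetical illustration only; the region labels, the WESTERN_REGIONS grouping, and the audit_geographic_mix helper are assumptions made for the example, not a standard tool, and a real audit would depend on provenance metadata that many datasets do not yet carry.

```python
# A minimal, hypothetical sketch of the geographic audit described in point 1.
# Assumes each training record can be tagged with a coarse region label; the
# 80% threshold mirrors the rule of thumb above. Not a standard tool.
from collections import Counter

WESTERN_REGIONS = {"north_america", "western_europe"}  # illustrative grouping


def audit_geographic_mix(records):
    """Return the share of records per region and the combined Western share."""
    counts = Counter(r.get("region", "unknown") for r in records)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    shares = {region: n / total for region, n in counts.items()}
    western_share = sum(shares.get(region, 0.0) for region in WESTERN_REGIONS)
    return shares, western_share


if __name__ == "__main__":
    # Toy dataset: 85% of records come from Western sources.
    sample = (
        [{"region": "north_america"}] * 70
        + [{"region": "western_europe"}] * 15
        + [{"region": "south_asia"}] * 10
        + [{"region": "sub_saharan_africa"}] * 5
    )
    shares, western_share = audit_geographic_mix(sample)
    for region, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"{region}: {share:.0%}")
    if western_share > 0.80:
        print("Warning: more than 80% of records come from Western sources; "
              "consider diversifying the data or building localized models.")
```

In practice, the hard part is not the arithmetic but the tagging: deciding, dataset by dataset, which country or language community each record actually represents.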
[Image: wacomka/Getty Images]
Original article @ Fast Company.
October 27, 2025
What AI reskilling really requires
It’s not about training individuals—it’s about transforming organizations.
When Accenture announced plans to lay off 11,000 workers who it deemed could not be reskilled for AI, the tech consulting giant framed the decision as a training issue: some people simply cannot learn what they need to learn to thrive in the world of AI. But this narrative fundamentally misunderstands—and significantly underplays—the deeper challenge.
Doug McMillon, the CEO of Walmart, pointed to this bigger challenge recently when he said, “AI is going to change literally every job.” Now, if this turns out to be true, every role will have to be reimagined. And when every role changes, the shift amounts to more than an adjustment to individual jobs or even whole fields. It implies a profound and systemic change in the nature and meaning of the work itself.
For instance, when a customer service rep’s job changes from answering questions to managing AI escalations, they are no longer doing old-fashioned customer service—they are doing AI supervision in a customer service context. Their supervisor isn’t managing people anymore; they are orchestrating a hybrid intelligence system composed of humans and AI. And HR isn’t evaluating communication skills; they are assessing human–AI collaboration capacity. The job titles remain the same, but the actual work has become something entirely different.
You cannot prepare people for this disruption by sending them to a three-day workshop on how to prompt more effectively. When the change is as systemic as this, the real question is not whether individuals can be separately reskilled. It is whether organizations can transform themselves at the scale and speed AI demands.
TWO TYPES OF TRANSFORMATION
To understand the reskilling demands created by AI transformation, it helps to distinguish between bounded and unbounded transformations.
Bounded transformations are organizational changes that follow a predictable path, starting from specific areas of operation with well-defined capabilities to develop. They unfold in distinct stages, allowing companies to master one phase before moving to the next.
Unbounded transformations, on the other hand, are sweeping changes that affect all parts of an organization at the same time, with no single point of origin. Because they simultaneously alter job functions, competencies, processes, and performance measures in interconnected ways, they can’t be tackled piecemeal or rolled out sequentially—they demand a holistic, coordinated strategy.
The AI revolution is a paradigmatic example of an unbounded transformation, as it fundamentally reshapes how we think, work, and create value across every industry, function, and level of the organization—redefining not just individual tasks but the very nature of human contribution to work itself.
And that means that it is not enough to simply reskill employees for AI. Instead, business leaders will need to transform the entire ecosystem of work—the infrastructure, the interconnected roles, and the culture that enables change. And they will often need to do all of this across the entire organization at once—not sequentially, not department by department, but everywhere simultaneously.
There are three key dimensions that organizations need to address if they are to successfully transform themselves and reskill their workers for the AI revolution.
1. REBUILDING THE INFRASTRUCTURE OF WORK
Most reskilling budgets cover workshops and certifications. Almost none cover what actually determines success: rebuilding the systems people work within.
For example, AI often now handles routine inquiries in contact centers while humans tackle complex cases. As McKinsey argues, successfully implementing this shift demands far more than teaching agents to use AI tools. Businesses must rethink operating models, workflows, and talent systems—creating escalation protocols that integrate with AI triage, metrics that measure human-AI collaboration rather than individual ticket counts, and training that builds the judgment needed to handle the ambiguous cases that AI can’t decide. Career paths and team structures must evolve to support hybrid human-AI capacity.
Very little of this work is “training” in any classical sense—rather, it is organizational architecture and system-building. And the organizations that do not undertake this work will find that their AI reskilling programs will inevitably fail.
2. THE NETWORK EFFECT: WHY ROLES MUST TRANSFORM TOGETHER
Organizational roles do not exist in isolation. They are interconnected nodes in an organizational network. When AI transforms one role, it also transforms every other role it touches.
For example, when AI chatbots handle routine customer inquiries, frontline agents typically shift to managing only complex situations, which may be more emotionally charged for the client. This immediately transforms the role of their trainers and coaches, who must now redesign their curriculum away from teaching efficient delivery of scripted informational responses toward teaching de-escalation techniques, empathy skills, and complex judgment calls. Further, team supervisors will now no longer be able to evaluate performance based on call handle times and throughput—they must instead develop new frameworks for assessing emotional intelligence and problem-solving under pressure.
The result is that holistic and comprehensive role redesign is essential if employees are to be successfully reskilled for AI. AI transformation requires synchronized change across interconnected roles—when one piece of the network shifts, every connected piece must shift with it.
3. CULTURAL TRANSFORMATION
As Peter Drucker almost said, culture eats reskilling for breakfast. It is crucial for organizations to understand that cultural transformation is not a nice-to-have follow-on that comes after technical change. Rather, it is the prerequisite that determines whether technical change takes root at all. Without the right culture, training budgets become write-offs and transformation initiatives become expensive failures.
Consider a financial services firm training analysts on AI tools. If the culture punishes AI-assisted mistakes more harshly than human mistakes, adoption dies. If success metrics still reward “heroic individual effort,” collaboration with AI will be undermined. If executives do not visibly use AI and acknowledge their own learning struggles, teams will treat it as optional theater rather than as a strategic imperative.
The culture that enables AI reskilling is one built on curiosity, not certainty. This culture prizes experimentation over perfection and treats failure as data, not disgrace. Indeed, because AI tools evolve so quickly, the defining capability of an AI-ready culture is not mastery but continuous learning. Relatedly, psychological safety becomes essential: people must feel free to test, question, and sometimes get it wrong in public.
And the signal for all of this comes from the top. When leaders openly use AI, admit what they don’t know, and share their own learning process, they make exploration permissible. When they do not, fear takes its place.
In short, successful AI cultures don’t celebrate competence—they celebrate learning.
CONCLUSION
AI reskilling is not a training challenge—it is an organizational transformation imperative. Companies that recognize this will rebuild their infrastructure, redesign interconnected roles, and cultivate learning cultures. Those that don’t will keep announcing layoffs and blaming workers for failures that were always about systems, not people.
[Image: Michael H/Getty Images]
Original article @ Fast Company.
October 26, 2025
The Slow-Cooked Mind
What simmering beef ribs can teach us about patience, creativity, and real insight.
KEY POINTS
Like beef ribs that braise for hours, good thinking can’t be rushed. The brain’s default mode needs time to wander and make connections that lead to genuine insight.
Patience with unanswered questions is trust in a process that yields wisdom.
Like all of us, I’m busy, but most days I manufacture the time to cook for my family. I braise beef ribs for hours, I let stock simmer all afternoon, I julienne vegetables till they’re just right for the salad.
It’s slow, deliberate work. I move through the kitchen without hurry, letting things take the time they need. And when I do this, when I give a meal the patience it asks for, it shows. The flavors deepen. The meat falls apart with the slightest pressure of a spoon. The sauce reduces to exactly the right thickness.
It can feel like magic, but it isn’t. It’s simply care applied over time.
What Slow Cooking Reveals
When we rush in the kitchen, we get something edible but rarely something deep or transformative. If you’ve ever eaten a stew cooked for six hours beside one cooked in 30 minutes, you know the difference. One fills you while the other nourishes you.
Thinking is like that. When we hurry to resolve a question, when we crave an instant answer, we get output—but not insight. We produce words instead of acquiring wisdom.
The best thinking, like the best cooking, takes time to develop. It needs to simmer quietly, to let ideas mix and flavors deepen. And that requires patience, a rare quality in a world that rewards speed, certainty, and immediate results.
The Science of Simmering
Neuroscientists talk about the brain’s default mode network (DMN), the system that is active when we’re not intentionally making ourselves solve a problem or perform a particular task. The DMN kicks into gear during rest and in those times when the mind wanders; it’s what takes over when we walk or shower or simply stare out of the window.
Research on the DMN shows that it helps us make unexpected connections, integrate scattered experiences into coherent meaning, and develop genuine insight. This is some of the most important cognitive work we need to do, and it kicks in only when we give it time and space.
Insight, in other words, requires time—slow thinking.
When we’re constantly consuming, responding, producing—always on high heat—we get output without depth. It’s like trying to braise at 500 degrees: You get meat that’s charred on the outside and raw in the middle.
Real transformation, in thinking as in cooking, requires sustained, patient heat.
Trusting the Process
Of course, patience doesn’t come naturally to most human beings. It’s uncomfortable to sit with questions that refuse to yield answers. Most of us, when we don’t understand something, want to fix it, solve it, or move on. The last thing we want to do is stay with the discomfort of not knowing the answer yet.
But some questions—often, the most important ones—resist that. They ask us to stay with them, sometimes for years.
Cooking taught me how to do that. When I’m braising short ribs, I don’t hover anxiously at hour two, wondering if it’s working. I know what’s happening inside the pot, even if I can’t see it. I know that at first the meat will taste tough and bland, and that’s exactly how it should be. The magic happens slowly—cell walls breaking down, collagen melting, flavors mingling.
Even if it tastes terrible now, I know it will taste extraordinary in six hours.
Thinking is much the same. When I’m wrestling with a hard question—about work, a relationship, a choice—I remind myself that the answer may not be ready yet. My job is to keep showing up, to keep the heat steady, to let time do its work.
It’s a kind of trust—in the process, in the mind, in life itself. The answers will come, the questions will dissolve. We just have to stick with it.
Five Ways to Let Your Ideas Simmer
Over time, I’ve found a few habits that help me practice this slower way of thinking:
1. Keep an “idea crockpot.” Write down the questions or problems that matter to you. Don’t try to solve them immediately. Instead, let your thinking develop and revisit them every so often.
2. Protect your simmer time. Schedule time for walks, showers, cooking, or anything that keeps your hands busy and your mind free. These are the moments when the DMN, the brain’s slow cooker, does its best work.
3. Resist the instant answer. When you’re curious about something, sit with it before Googling or asking AI. Let your mind wander around the question first. You’ll be surprised by what surfaces when you give your own thinking the first pass.
4. Practice sensory patience. Cook something slow. Garden. Build something by hand. Train your nervous system to tolerate the discomfort of waiting and watching.
5. Embrace “I don’t know yet.” When asked for your opinion, experiment with saying, “I’m still thinking about that.” It’s not indecision; it’s intellectual honesty. Some questions deserve to marinate.
Slow Thinking as a Practice of Love
Time is the one resource we can never replenish. And when we give our time to something, to some person or activity, we’re saying: This matters. When I spend hours dicing vegetables and reducing sauces for my son, I’m not just making him dinner. I’m telling him: I love you. When we give time to a question—when we stay with it long after the easy answers have run out—we’re doing more than solving a problem. We’re caring for it. We’re saying: This question, this idea, this truth is worthy of my attention.
Ultimately, to think slowly is an act of love.
[Photo: Faisal Hoque]
Original article @ Psychology Today.


