Abby Covert's Blog

November 13, 2025

The Sensemaker’s Guide to Collaboration

You can see the mess. You’ve even got the imagination to fix it. But somehow, you can’t seem to get anyone else on board with making it better.

If this sounds familiar, you’re not alone. As sensemakers, we have a special knack for spotting chaos in systems, language, and structure. But having the vision to fix something and having the power to make it happen are two very different things.

This guide is about closing that gap. It’s about understanding what collaboration actually means when you’re the one pushing for change, and how to work with others when they hold the keys to making it real.

This article covers:

What does Collaboration mean in Sensemaking?
Reasons to Collaborate
Common Use Cases for Collaboration
Types of Collaboration
Approaches to Collaboration
Tips for Getting Started with Collaborating
Collaboration Hot Takes
Collaboration Frequently Asked Questions

What does Collaboration mean in Sensemaking?

Collaboration in sensemaking isn’t just “working together.” It’s about finding the people who can actually help you move your ideas forward and figuring out how to align your efforts with theirs.

Think of it this way: as a sensemaker, you’re often the cart looking for a horse. You’ve got a useful load to carry—your ideas, your vision for better structure, your plans to fix the mess. But without someone else’s momentum to hitch yourself to, you’re just sitting there waiting to be useful.

Real collaboration means:

Finding alignment, not just agreement. Someone can nod along with your idea and still never help you ship it. You need people whose goals match yours, not just people who think your ideas sound nice.

Understanding power dynamics. You might see the problem clearly, but if you don’t have the power to fix it and the people who do have different goals, collaboration becomes nearly impossible.

Accepting that timing matters. Sometimes the best collaboration happens when you wait for the right moment, when someone else’s needs finally match what you’ve been trying to do all along.

Collaboration in sensemaking is less about convincing people and more about understanding what drives them, then finding the overlap between what you want to fix and what they need to achieve.

Reasons to Collaborate

You might be wondering: why even bother collaborating? Can’t I just fix things on my own?

Sometimes, yes. But most of the time, collaboration isn’t optional. Here’s why:

You don’t have the power to make changes alone. If someone else controls the budget, the team, the timeline, or the final decision, you need them on board. No amount of good ideas can replace actual decision-making power.

Different people have different knowledge. You might understand the information structure, but someone else knows the business goals, the technical limits, or the customer needs. Collaboration helps you build a complete picture.

Change is harder to undo when multiple people support it. When you collaborate well, you create buy-in. That makes your work stick instead of getting rolled back the moment you move on to something else.

You need resources you don’t control. Time, money, people, tools: all of these often belong to someone else. Collaboration is how you get access to what you need.

Some problems are too big for one person. Even if you have the power to fix something small, bigger messes require multiple perspectives and skill sets to solve.

The bottom line: collaboration multiplies your impact. You might have the vision, but you need other people’s resources, power, and knowledge to make it real.

Common Use Cases for Collaboration

So when should you actively seek out collaboration? Here are the situations where it matters most:

Redesigning navigation or structure across a product.
This touches design, engineering, product management, and often marketing. You’ll need all of them to make it happen.

Creating or updating a language system.
Whether it’s a content style guide, a taxonomy, or just standardizing terms across teams, language change requires coordination across everyone who uses those words.

Fixing technical debt in information architecture.
The mess might be obvious to you, but engineering owns the work to fix it. You need their time and priority, which means you need to understand what drives their decisions.

Launching a new feature or product.
This is collaboration by default. You’re working with a team from the start, and your job is to make sure the information structure supports what everyone else is building.

Responding to outside forces.
New laws, changing user needs, business pivots: these often create the pressure you need to finally fix problems that have been sitting around forever. Use that moment to collaborate when everyone’s incentives align.

Scaling systems across teams.
What works for one team might not work for another. Collaboration helps you understand different needs and build systems that serve everyone.

Post-merger integration.
When companies combine, the clash of different systems, language, and structure becomes impossible to ignore. This is prime collaboration territory, even though it’s messy.

Types of Collaboration

Not all collaboration looks the same. Understanding the different types helps you pick the right approach for your situation.

Partnership collaboration.
This is when you and someone else share power and decision-making. You’re equals working toward the same goal. This works best when you both have similar levels of authority and aligned incentives.

Support collaboration.
Someone else is driving the work, and you’re helping them succeed. Your role is to provide expertise, feedback, or resources they need. This is common when you’re a specialist or an internal expert being pulled into someone else’s project.

Leadership collaboration.
You’re driving the work, and others are supporting you. You own the vision and decisions, but you need help executing. This only works when you have the power to make final calls.

Peer collaboration.
You and your collaborators have equal standing but different expertise. Think designers and developers, or content strategists and product managers. You need each other’s skills to get the work done.

Consultant collaboration.
You’re brought in to provide an outside perspective. Your job is to spot what others can’t see (or say) and give them the tools to act on it. This type has built-in time limits and clear boundaries.

Cross-functional collaboration.
This involves people from different departments or disciplines working together. It’s often the messiest type because everyone has different goals, language, and ways of working.

Knowing what type of collaboration you’re in helps you set the right expectations for how decisions get made and who does what.

Approaches to Collaboration

Once you know what type of collaboration you’re in, you need an approach that fits the situation. Here are the most effective ways to collaborate as a sensemaker:

Start with questions, not solutions.
Ask people what drives their decisions. What are they measured on? What keeps them up at night? What would make their job easier? These questions help you understand their incentives, which is the foundation of good collaboration.

Map incentives before mapping architecture.
Before you propose any changes to structure or language, understand what each person involved is incentivized by. If their success depends on speed and yours depends on quality, you need to find the overlap before you start drawing diagrams.

Find the horse for your cart.
Don’t attach yourself to just anyone. Look for people whose momentum is already heading in a direction that helps you. If someone is incentivized by the exact thing your idea improves, that’s your collaboration partner.

Wait for the “until” moment.
Sometimes the best approach is patience. Organizations often can’t change until something forces them to. When that moment comes, be ready with your solution.

Speak their language.
If you’re talking to product managers, frame your ideas in terms of user metrics or revenue. If you’re talking to engineers, frame them in terms of technical efficiency or reducing bugs. In other words, match your message to what they care about.

Tips for Getting Started with Collaborating

If you’re new to intentional collaboration or feeling stuck, here’s how to start:

Make a list of what you want to change.
Write down the messes you see and why they matter. Be honest about which ones bug you personally versus which ones actually work against the organization’s intentions.

For each item on your list, ask: who has the power to make this change?
Not who agrees with you, but who can actually approve it, fund it, or make it happen. Those are the people you need.

Research their incentives.
What are they measured on? What goals did they share in recent meetings or emails? What problems keep coming up for them? You can often find this out just by paying attention.

Look for overlap.
Where do your intentions and their incentives meet? That’s your opening for collaboration.

Start a conversation.
Don’t pitch your solution. Ask about their challenges. Listen to how they describe their goals. Look for the moment when you can say, “That connects to something I’ve been thinking about.”

Test the waters with something small.
Don’t propose a six-month project right away. Find a quick win that helps both of you and proves you can work well together.

Be ready to wait.
If the timing isn’t right, that’s okay. Keep building relationships so when the moment comes, you’re the first person they think of.

Get help from your manager.
If you’re hitting incentive walls, talk to your manager. They should be helping you navigate stakeholder dynamics and clear the path for your work.

Collaboration Hot Takes

These might sting a little, but they’re true:

Your good ideas don’t matter if no one with power cares. Being right isn’t enough. You need alignment with people who can actually act.

Most collaboration problems are actually incentive problems. If you’re constantly fighting with stakeholders, stop looking at their personality and start looking at what they’re measured on.

Managers who don’t help with incentive alignment aren’t managing. If your manager keeps telling you to “sell your ideas better” without helping you navigate stakeholder incentives, they’re not doing their job.

Sometimes the best collaboration is walking away. If you can’t find alignment and can’t change the incentives, it’s okay to stop trying. Save your energy for work that can actually move forward.

Being the expert doesn’t mean being in charge. You might know more about information architecture, ontologies, or knowledge management than anyone else, but if someone else owns the decision, you’re still in a support role. Act accordingly.

People quit over incentive misalignment more than anything else. If you’re surrounded by messes that no one will fix despite everyone agreeing they’re problems, that’s a sign of broken incentive architecture. That’s a management problem, not a you problem.

Collaboration without aligned incentives is just performance. You might have meetings and make decks and get nods of agreement, but nothing actually changes. Real collaboration requires everyone to need the same outcome.

Collaboration Frequently Asked Questions

What if everyone agrees something is a mess but no one will fix it?
This is incentive misalignment. Everyone can see the problem, but fixing it doesn’t help any of them reach their goals. Your options: find someone whose incentives would be helped by fixing it, wait until external forces make fixing it necessary, or back-burner it.

How do I collaborate with someone who has more power than me?
Understand what drives their decisions. Frame your ideas in terms of what they care about. If you can’t find alignment, you might need their manager or peer to advocate with you. And remember: you can’t force someone with more power to change. You can only influence them if it helps them.

Should I collaborate on everything?
No. Some things you can do alone, and you should. Save collaboration for work that requires buy-in, resources, or power you don’t have. Not everything needs a working group.

How do I know if collaboration is working?
Things are moving forward. Decisions are being made. You’re not rehashing the same conversations every meeting. If you’re not seeing progress after a reasonable amount of time, something about the collaboration isn’t working.

What’s the difference between consensus and collaboration?
Consensus is everyone agreeing. Collaboration is working together toward a goal, even if you don’t all agree on every detail. You need aligned incentives for collaboration, not unanimous agreement.

Can I collaborate with people who have opposite incentives from me?
Not effectively. You might be able to work in parallel on different things, but true collaboration requires at least some shared goals. If your success hurts them or vice versa, you’re opponents, not collaborators.

How do I collaborate across teams with different languages and cultures?
Start by learning how each team talks about their work and what they care about. Translate your ideas into their terms. Find the person on each team who naturally bridges gaps: they’re worth their weight in gold.

Collaboration as a sensemaker isn’t about being more persuasive or making better presentations. It’s about understanding power, finding alignment, and knowing when to push, when to wait, and when to walk away.

We can only work smoothly when incentives are aligned. Everything else is just going through the motions.

Now go find your horse.

If you want to learn more about my approach to collaboration, consider attending my workshop on 11/21 from 12 PM to 2 PM ET. Working Together: Tools for Better Team Design — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in December – Change Management.


October 17, 2025

The Sensemaker’s Guide to Return on Investment

The difference between good sensemaking and great sensemaking isn’t just about having the right answer. It’s about understanding what return you’re getting from the time, energy, and resources you put into making sense of complex problems.

This article covers:

What is Return on Investment in Sensemaking?
Reasons to Focus on a Return on Investment
Common Use Cases for Return on Investment
Types of Return on Investment
Approaches to Return on Investment
Tips for Getting Started with Return on Investment
Return on Investment Hot Takes
Return on Investment Frequently Asked Questions

What is Return on Investment in Sensemaking?

Return on Investment (ROI) in sensemaking is the measurable value you get back from the time and resources you put into it.

Unlike traditional ROI calculations that focus purely on money, sensemaking ROI includes things like reduced confusion, faster decision-making, better team alignment, and fewer costly mistakes down the road.

Think of it this way: every hour you spend creating clear frameworks, consistent language, and logical structures either pays dividends later or it doesn’t. The question is whether you’re tracking which investments actually work. If you can’t tell what got better once the work is done, you might have made things worse.
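
To make that concrete, here is a minimal sketch of the underlying arithmetic in Python. The function and the example numbers are mine, not a prescribed formula; in sensemaking work the “value returned” is usually an estimate, like hours saved multiplied by an hourly rate.

```python
def roi(value_returned: float, cost: float) -> float:
    """Classic ROI: net gain divided by what was invested."""
    return (value_returned - cost) / cost

# Hypothetical example: 20 hours spent building a shared glossary
# (at $75/hr = $1,500), which we estimate saves the team 4 hours
# of confusion per week across a 50-week year.
cost = 20 * 75
value_per_year = 4 * 50 * 75
print(f"First-year ROI: {roi(value_per_year, cost):.0%}")  # -> 900%
```

The number itself matters less than the habit: name the cost, name the expected return, and check the estimate later against what actually happened.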

Reasons to Focus on a Return on Investment

You’re drowning in requests for your opinion. When everyone sees you as the person with answers, demand can quickly outpace your ability to give thoughtful responses. Focusing on ROI helps you prioritize which sensemaking work will have the biggest impact.

Your past decisions are creating problems. Quick decisions made without considering the whole system can add up to a confusing mess. ROI thinking forces you to consider long-term consequences, not just immediate solutions.

You need to justify sensemaking work to others. Stakeholders often see sensemaking as “soft” work that’s hard to measure. Having clear ROI helps you make the case for investing in better structures and processes.

You want to prevent burnout. Constantly being the go-to person for answers is exhausting. Strategic thinking about ROI helps you build systems that reduce the need for your constant input.

Your organization is scaling. What works with 10 people breaks down with 100. ROI-focused sensemaking creates scalable foundations instead of person-dependent solutions.

Common Use Cases for Return on Investment

Based on patterns I’ve seen across organizations, these are the areas where ROI questions come up most often. Each can create huge value when done well, or quietly drain resources when overdone, misplaced, or misaligned with goals.

Naming and Language Consistency

Shared language can clarify work, build trust, and prevent costly confusion, but teams can also lose months debating names that never reach users. The return depends on knowing when “good enough” is good enough.

Information Organization

Organizing information helps teams find, reuse, and act faster, until the structure itself becomes a maze. Over-modeling or premature taxonomy work can slow things down as much as chaos does.

Process Documentation

Documenting reality saves time and reduces errors, unless it turns into performative paperwork. The return shows up when documentation reflects living systems, not when it becomes a graveyard of outdated steps.

Framework Creation

Frameworks can accelerate alignment and decision-making or paralyze teams if they’re treated as gospel. The best ones evolve with context; the worst ones outlive the problem they were meant to solve.

Types of Return on Investment

As you start to work through defining the expected return on your next investment, here are a few types of returns that I have seen repeatedly while working on information architectures in all sorts of contexts.

Time Savings

When structures, language, and processes are clear, people spend less time searching, aligning, and repeating work. These savings can compound over time as the organization gains efficiency in how it learns and decides.

Cost Reduction

Cost-based returns emerge when better understanding prevents expensive mistakes. Clearer systems reduce duplication, unnecessary complexity, and the need for manual intervention. The fewer surprises and do-overs there are, the more sustainable the organization’s operations become.

Capability Building

Capability-based returns come from improving the organization’s capacity to handle complexity. Each time a framework, model, or shared practice is created, the team becomes more adaptable and independent. These returns grow slowly but can provide lasting resilience.

Risk Mitigation

Risk-based returns show up as fewer crises and less uncertainty. When roles, processes, and information are well-structured, the organization becomes better at preventing problems and faster at recovering when they happen.

Profit Growth

Profit-based returns are the most visible to leadership and investors. They result when time, cost, quality, capability, and risk improvements combine to create more capacity for innovation, stronger customer relationships, and higher margins. Profit is often the byproduct of better sensemaking across the system, not just the goal itself.

Reputation and Perception

Reputation-based returns come from being understood and trusted. When an organization communicates clearly, acts consistently, and delivers on its promises, public perception improves. This strengthens relationships with customers, employees, and partners, creating a lasting form of goodwill that can’t easily be bought or rebuilt once lost.

Approaches to Return on Investment

There’s no single way to approach sensemaking ROI, but here are some things I would recommend:

The Pilot Project Approach
Pick one small, messy area and invest in making sense of it. Measure before and after. Use those results to make the case for broader investment. This works well when you need to prove value before getting resources.

The Pain Point Method
Start with the most expensive confusion in your organization. What’s causing the most rework, delays, or frustration? Target your sensemaking efforts there first. The ROI is often immediately visible.

The Systems Thinking Strategy
Map out how your current sensemaking decisions connect to each other. Look for places where better structure in one area could improve multiple other areas. This strategy works well when you can get a high-level view of a broad structural issue.

The User Journey Focus
Follow a specific path from start to finish and note every point where confusion slows things down. Then develop solutions that smooth the most friction-heavy parts of the journey.

Tips for Getting Started with Return on Investment

Start with “How will we know if this works?”
Before diving into any sensemaking project, get specific about what success looks like. Will it save time? Reduce errors? Improve satisfaction? Make sure you can actually measure whatever you choose.

Track your current state
You can’t measure improvement without knowing where you started. Document how long things currently take, what mistakes happen regularly, and where people get confused most often.

Pick battles you can win
Your first ROI-focused project should be something where you can show clear results quickly. Success builds credibility for bigger investments later.

Get friendly with data people
The people who manage your organization’s data know where the measurement gold is buried. They can also tell you what’s actually possible to track versus what sounds good in theory.

Document your process, not just your outcomes
Future you (and your teammates) will want to know how you got your results, not just what they were. This makes your sensemaking ROI repeatable and teachable.

Expect resistance to structure
People often resist new frameworks or processes, even when they’ll ultimately help. Plan for this resistance and have a strategy for getting buy-in.

Ask “What Keeps You Up at Night About This?”
This question disarms stakeholders enough to tell you the truth about what’s really at stake in your work. In my experience, time and time again this has been the question that unlocks the problem set.

Return on Investment Hot Takes

Most sensemaking work has terrible ROI because it’s reactive instead of strategic. Answering individual questions as they come up feels productive, but building systems that prevent those questions has much better returns.

The best sensemaking ROI comes from reducing your own workload. If your sensemaking work doesn’t eventually make you less necessary for day-to-day decisions, you’re probably doing it wrong.

Perfect frameworks have worse ROI than good-enough frameworks that people actually use. Don’t let the pursuit of the ideal prevent you from implementing something that works.

Teaching others to think like sensemakers has better long-term ROI than doing all the sensemaking yourself. Your goal should be to work yourself out of being the bottleneck.

The highest ROI sensemaking work often looks boring. Consistent naming conventions and clear documentation aren’t sexy, but they pay dividends for years.

Return on Investment Frequently Asked Questions

Q: How do I measure ROI when the benefits are mostly qualitative?
A: Look for proxy metrics. If something “improves clarity,” that might show up as reduced email back-and-forth, fewer revision cycles, or higher confidence scores in surveys. The key is finding measurable outcomes that connect to your qualitative goals.

Q: What if I can’t get data on current performance?
A: Start tracking now, even if it’s imperfect. You can estimate baselines through surveys, time-tracking exercises, or small observational studies. Having rough numbers is better than having no numbers.

Q: How long should I wait to measure ROI?
A: It depends on what you’re measuring. Time savings might be visible within weeks, while culture changes could take months. Set both short-term and long-term measurement points.

Q: What if my sensemaking investment doesn’t show positive ROI?
A: Learn from it. Understanding what doesn’t work is valuable data for future investments. Also consider whether you’re measuring the right things or if the benefits are showing up somewhere unexpected.

Q: Should I focus on easy wins or big transformations?
A: Start with easy wins to build credibility and learn your measurement process. Once you’ve proven your approach, you can tackle bigger challenges with more confidence and support.

Q: How do I balance ROI thinking with intuitive sensemaking?
A: Good sensemaking combines both. Use your intuition to identify opportunities and design solutions, then use ROI thinking to prioritize investments and measure success. They’re complementary, not competing approaches.

Understanding the return on investment in your sensemaking work isn’t just about justifying your time—it’s about making sure your efforts actually create the clarity and understanding your organization needs. By thinking strategically about which confusion to tackle first and how to measure your impact, you can transform from someone who gives good answers to someone who builds systems that help everyone find their own answers.

The goal isn’t to turn sensemaking into a purely numbers-driven practice, but to make sure your ability to create clarity is focused where it can do the most good.

If you want to learn more about my approach to strategic sensemaking, consider attending my workshop on 10/25/25 from 12 PM to 2 PM ET. A Return on Investment Matters: Building a Business Case for IA – this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in November – Collaboration in IA.


September 12, 2025

The Sensemaker’s Guide to Arguing

You know that moment when you have a brilliant idea for fixing something that’s clearly broken, but then you realize the hard part isn’t figuring out the solution—it’s getting everyone else to see it too?

Welcome to the real work of sensemaking. The messy, human part where good ideas go to die unless you know how to argue for them properly.

Most people think arguing is about winning or being stubborn. But for sensemakers, arguing is how we turn individual insights into shared understanding. It’s how we test ideas against reality before they get expensive.

This article covers:

Why is there Arguing in Sensemaking?
Reasons to Argue
Common Use Cases for Arguing in Sensemaking
Types of Arguments
Approaches to Arguments
Tips for Getting Started with Arguing
Arguing Hot Takes
Frequently Asked Questions

Why is there Arguing in Sensemaking?

When you’re trying to make sense of complex problems, you need ways to pressure test your thinking. Not because you enjoy conflict, but because untested ideas become expensive mistakes.

Think about it: every structural change, every new process, every “better way of doing things” has to survive contact with real people, real constraints, and real trade-offs. Arguing early helps you find the weak spots before they find you.

Good arguing helps you:

Surface hidden assumptions before they derail your proposal
Build empathy for the people who have to live with your solution
Get clearer on scope, risks, and what success actually looks like
Turn resistance into collaboration by addressing concerns head-on

Reasons to Argue

When you argue properly, you’re essentially stress-testing your thinking before it hits reality. You’re creating what I call “comparability”: a level(er) playing field where you can honestly evaluate different approaches against each other. This process forces hidden assumptions into the open and makes trade-offs visible before they bite you.

But arguing isn’t just about finding flaws. It’s about building confidence that your proposal can survive contact with the real world. If your idea crumbles under friendly questioning, imagine what happens when it meets actual users, tight deadlines, and budget constraints.

Most importantly, arguing keeps you from falling in love with your first idea. That initial solution that feels so obvious? It’s usually not your best work. The discipline of considering alternatives—even if you end up sticking with your original approach—makes your final proposal stronger and your reasoning clearer.

Common Use Cases for Arguing in Sensemaking

Structural changes: When proposing new ways to organize information, processes, or teams. Challenge your classification rules, question your content strategy, and stress-test your curation plans.

Resource requests: Before asking for budget, time, or people, argue through the real costs and benefits. Include the hidden costs like training, migration, and ongoing maintenance.

Priority decisions: When everything feels urgent, arguing helps you surface the real criteria for what matters most. Question timelines, challenge scope, and get clear on trade-offs.

Process improvements: Before proposing a new workflow, argue through who gets impacted and how. Consider the people who have to live with your process every day.

System design: Challenge your mental models. Question whether your structure will make sense to users. Test your assumptions about how people will actually behave.

Types of Arguments

Not all arguments are created equal. Here are five forms that reliably strengthen your work.

Evidence-based arguments: Ground your thinking in real data about real users.

Example: “Our content audit shows 40% of pages haven’t been updated in two years” is stronger than “I think our content is stale.”
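
As a minimal sketch of where a number like that comes from, assuming you have a content inventory with last-updated dates (the field names and the two-year threshold here are illustrative):

```python
from datetime import date, timedelta

# Hypothetical content inventory: (url, last_updated)
pages = [
    ("/pricing", date(2023, 2, 1)),
    ("/about", date(2025, 6, 12)),
    ("/docs/setup", date(2022, 11, 30)),
]

cutoff = date.today() - timedelta(days=2 * 365)
stale = [url for url, updated in pages if updated < cutoff]
print(f"{len(stale) / len(pages):.0%} of pages haven't been updated in two years")
```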

Structural arguments: These examine how well a proposed structure serves its intended purpose. Question classification rules, content lifecycle, and curation requirements.

Example: “Putting product specs in the same category as marketing pages makes the structure harder to maintain long-term” is stronger than “I don’t like where those pages are.”

Trade-off arguments: These make the costs visible.

Example: “This approach requires hiring two people and six months of data cleaning” is stronger than “this might be expensive.”

Constraint arguments: These test proposals against real limits.

Example: “What if we only had half the timeline?” or “What if our main content creator leaves?” is better than “let’s cross that bridge when we get to it.”

User mental model arguments: These focus on how people will actually interpret and use your structure.

Example: “Users think of pricing as part of the buying decision, not a buried FAQ item.” is stronger than “Users can’t find pricing.”

Approaches to Arguments

Create an Alternative Structure
Never argue for just one approach. Create at least two different structures that could meet your goals. This forces you to think harder about why one is better.

Run a Structural Argument Check
Evaluate proposals against intention, information, content, facets, classification, curation, and trade-offs. If any component feels weak, dig deeper. I wrote a lengthy article on Structural Arguments that should serve as a helpful guide if you take this approach on.

Take on the Implementation Reality Test
Ask “What would it actually take to build and maintain this?” Include all the unglamorous stuff like data migration, training, and ongoing updates.

Dig into the User Interpretation Challenge
Test whether your structure makes sense to the people who have to use it. Don’t just ask if they like it—watch how they actually interpret it.

Go into Failure Mode
Ask “What could break this?” Think about edge cases, unusual content, and situations where your rules don’t apply clearly.

Tips for Getting Started with Arguing

Document your thinking
Don’t just have opinions—show your work. Write down your assumptions, your reasoning, and your evidence.

Start with intention
Always connect your proposal back to the larger “why.” If you can’t explain how your structure serves the real goals, it’s not ready.

Get specific about content
Vague ideas fall apart when they meet real content. Know what you’re organizing, who creates it, and how it changes over time.

Name the trade-offs
Every good proposal involves giving up something to get something else. If you can’t find any downsides, you haven’t thought hard enough.

Test with real scenarios
Don’t just theorize—walk through actual use cases. What happens when someone uploads something that doesn’t fit your categories?

Plan for maintenance
How will your structure stay healthy over time? Who will keep it updated? What happens when priorities shift?

Arguing Hot Takes

Hot take #1: If you can’t argue against your own proposal, it’s not ready.
The best way to strengthen an idea is to attack it yourself first. Find the weak spots before someone else does.

Hot take #2: Most structures fail because of content, not concepts.
Beautiful organizational schemes collapse when they meet real content with all its messy inconsistencies and edge cases.

Hot take #3: Your users don’t care about your mental model.
They have their own way of thinking about things. Your structure needs to match their expectations, not your internal logic.

Hot take #4: Implementation is where good ideas go to die.
The best structural proposal is worthless if you can’t execute it with real people, real budgets, and real timelines.

Hot take #5: Second solutions are usually better than first solutions.
Your initial idea might be good, but the alternative you create to argue against it is often simpler and stronger.

Frequently Asked Questions

Q: How do I argue for changes when stakeholders seem happy with the status quo?
A: Make the hidden costs of the current approach visible. Show what’s breaking, what’s inefficient, and what opportunities are being missed.

Q: What if my proposal requires resources we don’t have?
A: That’s valuable information. Either scope down your proposal or make the case for why the resources are worth it. Don’t pretend expensive things are cheap.

Q: How detailed should my arguments be?
A: Detailed enough to be credible, simple enough to be understood. If you’re losing people in the details, you need clearer main points.

Q: What if stakeholders focus on parts of my proposal I think are minor?
A: Those parts probably aren’t as minor as you think. Pay attention to what people care about—it tells you something important about their mental models.

Q: How do I argue without seeming negative or difficult?
A: I hear this a lot when people talk about argument. In trying not to seem “argumentative,” we forget that change only happens if we argue for it. The key is framing your arguments as ways to make a proposal stronger, not as ways to tear it down. Saying “Let’s pressure test this so it succeeds” lands far better than “Here’s what’s wrong with this.”

Q: Should I present multiple options to stakeholders?
A: Sometimes. It can lead to better discussions, but it can also create decision paralysis. Use your judgment about whether comparison helps or hurts.

Remember: the goal isn’t to win arguments. The goal is to build better solutions. When you argue well, everyone wins because the ideas get stronger and the implementation gets smoother.

The real work of sensemaking isn’t coming up with solutions. It’s doing the hard thinking to turn those solutions into reality.

If you want to learn more about my approach to structural argumentation, consider attending my workshop on September 19th from 12 PM to 2 PM ET. How to Argue (for IA) Better — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in October — Proving a Return on Investment.


August 8, 2025

The Sensemaker’s Guide to Taxonomies

Let’s cut to the chase: taxonomy is one of those words that makes people’s eyes glaze over. But here’s the thing: you’re already doing it whether you know it or not.

Every time you organize your photos into folders, sort your emails, or decide which drawer the kitchen scissors belong in, you’re creating a taxonomy. You’re making sense of the world by putting like things together and giving them names that help you find them later.

The difference between what you do at home and what we do professionally is scale, stakes, and the number of people who need to understand your logic.

What is Taxonomy?

Taxonomy is the practice of classifying things into categories and giving those categories names that make sense to the people who will use them.

That’s it. No fancy jargon needed.

It’s about creating buckets for your stuff, whether that stuff is products, content, data, or any other kind of information, and making sure those buckets have clear, useful labels.

The word comes from biology, where scientists organize living things into kingdom, phylum, class, order, family, genus, and species. But you don’t need to be a scientist to benefit from taxonomic thinking. You just need to have stuff that needs organizing.

Reasons for Taxonomy

Why bother with taxonomy at all? Because without it, you’re drowning in chaos.

Findability: When things are properly categorized and labeled, people can find what they’re looking for. When they’re not, even the best search engine can’t help you.

Consistency: Taxonomy creates shared language within your organization. Instead of everyone calling the same thing by different names, you establish what goes where and what to call it.

Scalability: As your collection of stuff grows, taxonomy keeps it manageable. Without it, growth becomes clutter, and clutter becomes paralysis.

Decision-making: Clear categories make it easier to spot patterns, identify gaps, and make informed choices about what to keep, change, or throw away.

User experience: When people can predict where to find things, they feel more confident and capable. When they can’t, they feel frustrated and lost.

Common Use Cases for Taxonomy

Taxonomy shows up everywhere, often disguised as something else:

Website navigation: Those menu items and page categories? That’s taxonomy at work, helping users understand how information is organized.

Product catalogs: Whether you’re selling shoes or software, customers need to be able to browse by category, filter by attributes, and understand what makes one product different from another.

Content management: Blog posts, articles, documents, and media files all need homes. Taxonomy provides the filing system that keeps content organized and discoverable.

Data organization: Customer records, financial information, research findings. All of this needs to be categorized in ways that make analysis and reporting possible.

Knowledge management: Company policies, procedures, best practices, and institutional knowledge need structure so people can find and use what they need.

Compliance and governance: Legal, regulatory, and internal requirements often demand specific ways of categorizing and labeling information.

Types of Taxonomies

Not all taxonomies are created equal. Here are the main types you’ll encounter:

Hierarchical taxonomies work like family trees, with broad categories at the top and increasingly specific subcategories below. Think of how a library organizes books: literature → fiction → mystery → detective novels.

Faceted taxonomies let you slice and dice the same items in multiple ways. An online store might let you browse clothing by size, color, style, and price range all at once.

Flat taxonomies keep everything at the same level, like tags on a blog post. There’s no hierarchy, just a bunch of labels that can be applied as needed.

Network or polyhierarchical taxonomies allow items to belong to multiple categories and show relationships between different branches. They’re messier but often more realistic.

Sequences are conditions-based arrangements of content that progress over time.

Folksonomies are when users create their own tags and categories organically. It’s democratic but can get chaotic quickly.
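
To make the first two types concrete, here is a minimal sketch in Python built on the library example above. The data and function are invented for illustration; the point is that a hierarchy answers “where does this live?” while facets answer “which attributes match?”

```python
# Hierarchical: each category has exactly one parent, forming a tree.
hierarchy = {
    "Literature": None,
    "Fiction": "Literature",
    "Mystery": "Fiction",
    "Detective novels": "Mystery",
}

def breadcrumb(category: str) -> str:
    """Walk up the tree to show a category's full path."""
    trail = []
    while category is not None:
        trail.append(category)
        category = hierarchy[category]
    return " > ".join(reversed(trail))

print(breadcrumb("Detective novels"))
# Literature > Fiction > Mystery > Detective novels

# Faceted: one item carries several independent attributes,
# so shoppers can filter by any combination of them.
products = [
    {"name": "Trail runner", "size": 9, "color": "red", "style": "sneaker"},
    {"name": "Oxford", "size": 9, "color": "brown", "style": "dress"},
]
matches = [p for p in products if p["color"] == "red" and p["size"] == 9]
```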

Approaches to Taxonomy

There are two main ways to build a taxonomy: top-down and bottom-up.

Top-down means starting with the big picture and working your way down to the details. You decide on major categories first, then figure out subcategories, then specific items. This approach works well when you have a clear vision of how things should be organized and enough authority to make it stick.

Bottom-up means starting with the individual items and looking for patterns that suggest natural groupings. You sort through everything you have, notice what goes together, and build categories around those relationships. This approach works better when you’re dealing with existing content or when you need buy-in from people who are close to the material.

Most successful taxonomy projects use both approaches, moving back and forth between big-picture thinking and detailed sorting until everything clicks into place.

Tips for Getting Started with Taxonomy

Start with your users, not your org chart: The best taxonomies reflect how people actually think about and use information, not how your company happens to be structured.

Use words people recognize: If your audience calls them “pants,” don’t label the category “lower body garments.” Meet people where they are, not where you think they should be.

Test early and often: Show your taxonomy to real users and watch how they interact with it. Where do they get confused? What do they expect to find that isn’t there?

Keep it simple: Resist the urge to create categories for every possible variation. Sometimes “miscellaneous” is exactly what you need.

Plan for growth: Your taxonomy should be able to handle new types of content without breaking. Build in flexibility from the start.

Document your decisions: Write down why you made certain choices. Six months from now, you’ll be grateful for the reminder.

Start small: Don’t try to organize everything at once. Pick one area, get it right, then expand from there.

Taxonomy Hot Takes

Here are some opinions that might ruffle feathers:

Perfect taxonomies don’t exist: There’s no such thing as a taxonomy that works perfectly for everyone. Stop trying to create one and focus on making something that works well enough for your specific context.

Users will break your taxonomy: Plan for it. People will put things in the wrong categories, use labels incorrectly, and generally ignore your beautiful logic. Design for human behavior, not human ideals.

Taxonomy is never finished: It’s a living system that needs ongoing attention. If you’re not regularly reviewing and updating your taxonomy, it’s already out of date.

Consensus is overrated: Sometimes you need to make a decision, agree to measure the results, and move on. Waiting for everyone to agree on the perfect category name is a recipe for paralysis.

Most taxonomy problems are actually communication problems: If people can’t understand or use your taxonomy, the issue usually isn’t with the categories themselves—it’s with how you’ve explained or framed them.

Taxonomy Frequently Asked Questions

Q: How many categories should I have?
A: As few as possible while still being useful. Research suggests people can handle about seven top-level categories before they start getting overwhelmed.

Q: Should I use single words or phrases for category names?
A: Whatever makes the most sense to your users. Single words are cleaner but phrases can be clearer. Test both and see what works.

Q: What if something could belong in multiple categories?
A: That’s what cross-references and polyhierarchy are for. Don’t force things into artificial boxes just to maintain hierarchy. But also remember: if everything is in every category, “category” loses its meaning.

Q: How do I handle overlap between categories?
A: Some overlap is inevitable and even helpful. The goal isn’t perfect boundaries—it’s useful boundaries. The size and nature of the overlap matter a lot when determining how to handle it. This is something live card sorting with users can really help smooth out.

Q: Should I involve users in creating the taxonomy?
A: Absolutely, but don’t expect them to design it for you. Get their input on how they think about the content, then translate that into a working system.

Q: How often should I update my taxonomy?
A: Regularly but not constantly. Set up a review schedule—maybe quarterly or twice a year—and stick to it.

Q: What’s the biggest mistake people make with taxonomy?
A: Making it too complicated. The best taxonomies feel obvious once you see them.

If you want to learn more about my approach to Taxonomy, consider attending my workshop on 8/15/25 from 12 PM to 2 PM ET. Building Better Buckets: Hands-on Taxonomy Design — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in September – Argumentation.


July 18, 2025

The Sensemaker’s Guide to Metadata

Metadata is everywhere. It’s the invisible backbone that makes our digital world work. Yet most people treat it like an afterthought—slapping on tags and labels without much care. That’s a mistake. Good metadata can transform chaos into clarity. Bad metadata makes everything harder to find, use, and understand.

This article covers what metadata actually is, why you should care about it, and how to approach it thoughtfully. Whether you’re organizing a personal photo collection or building enterprise systems, the principles remain the same.

What is Metadata?

Metadata is data about content. It describes, explains, or gives context to that content.

Think of a library book. The book itself is the content. The catalog card with the title, author, publication date, and subject tags? That’s metadata. It helps you find the book, understand what it’s about, and decide if it’s what you need.

In digital systems, metadata works the same way. Every file on your computer has metadata—creation date, file size, who made it. Every photo has metadata about when and where it was taken, what camera settings were used. Every webpage has metadata that tells search engines what the page is about.

Metadata makes things findable, usable, and meaningful.

Reasons to Think About Metadata

Most people ignore metadata until they desperately need to find something. By then, it’s too late. Here’s why metadata deserves your attention upfront:

Finding stuff becomes possible. Without good metadata, finding information is like looking for a book in a library where all the books are randomly scattered on shelves. Sure, you might stumble across what you need eventually. But probably not.

Context stays attached. Information without context is just noise. Metadata preserves the who, what, when, where, and why that makes data meaningful. A spreadsheet called “Q3_numbers.xlsx” tells you nothing. But when the metadata shows it was created by Ahmed in Finance on October 15th for the board presentation, suddenly it has meaning.

Systems can connect and share. Well-structured metadata lets different systems talk to each other. Your customer database can connect to your email system because they both understand what a “customer ID” means. Without shared metadata standards, every system becomes an island.

Change becomes manageable. Organizations evolve. People leave. Systems change. But if your metadata is solid, institutional knowledge doesn’t walk out the door. New people can understand what exists and why it matters.

Common Use Cases for Metadata

Metadata shows up everywhere, but some patterns repeat across industries and contexts:

Content management uses metadata to organize articles, images, videos, and documents. Publishers tag articles by topic, author, and publication date. Photo agencies tag images by subject, location, and rights information. Without this metadata, content libraries become unusable quickly.

Data governance relies on metadata to track where data comes from, how it’s changed, and who can access it. In regulated industries, you need to prove your data is accurate and secure. Metadata provides that proof.

Search and discovery systems use metadata to understand what users are looking for. When you search for “red shoes” on an e-commerce site, you’re not searching the product images. You’re searching metadata tags like color, category, and description.

Asset management tracks physical and digital resources through their lifecycle. A manufacturing company needs to know which equipment needs maintenance, when it was last serviced, and who’s responsible for it. That’s all metadata.

Compliance and legal requirements often mandate specific metadata. Healthcare records need patient identifiers and access logs. Financial records need transaction details and approval chains. Legal documents need version history and confidentiality markings.

Types of Metadata

Not all metadata serves the same purpose. Understanding the different types helps you choose the right approach:

Descriptive metadata tells you what something is about. Title, author, subject, keywords, abstract. This is what most people think of when they hear “metadata.” It’s designed for humans who need to understand and categorize information.

Structural metadata explains how something is organized. Chapter headings in a book. Folder hierarchies on a computer. Database relationships. This metadata shows how pieces fit together into a larger whole.

Administrative metadata tracks the business side of information. Who owns it, who can access it, when it expires, how much it costs. This metadata supports governance and compliance requirements.

Technical metadata describes the nuts and bolts. File formats, compression settings, database schemas, API specifications. This metadata helps systems process and exchange information correctly.

Preservation metadata ensures information survives over time. Migration history, format dependencies, checksums for integrity verification. This metadata fights against digital decay and obsolescence.
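
If it helps to see those five types side by side, here is one hypothetical record sketched as a Python dictionary. Every field name here is illustrative, not a standard:

```python
# One document's metadata, grouped by the types above (all names illustrative).
report_metadata = {
    "descriptive": {
        "title": "Q3 Board Presentation",
        "keywords": ["finance", "quarterly", "board"],
    },
    "structural": {
        "part_of": "2025 board materials",
        "sections": ["summary", "revenue", "forecast"],
    },
    "administrative": {
        "owner": "Finance",
        "access": ["board", "executives"],
        "retain_until": "2032-10-15",
    },
    "technical": {
        "format": "application/pdf",
        "size_bytes": 2_480_133,
    },
    "preservation": {
        "checksum": "sha256:…",  # integrity fingerprint, elided here
        "migrated_from": "PowerPoint, October 2025",
    },
}
```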

Approaches to Metadata

How you create and manage metadata depends on your situation, but three basic approaches dominate:

Manual metadata creation means humans write tags, descriptions, and classifications by hand. This produces the highest quality metadata because humans understand context and nuance. But it’s slow, expensive, and doesn’t scale well. Use manual approaches for high-value content where accuracy matters more than speed.

Automated metadata extraction uses software to pull metadata from content itself. File properties, GPS coordinates from photos, text analysis for keywords. This scales beautifully and costs almost nothing. But automated systems miss context and make weird mistakes. Use automation for large volumes of routine content.

Hybrid approaches combine human intelligence with machine efficiency. Automated systems generate draft metadata that humans review and refine. Or humans create metadata templates that machines populate with specific values. Most successful metadata programs use hybrid approaches.

The key is matching your approach to your constraints. If you have unlimited time and budget, go manual. If you have unlimited content and tight budgets, you might start with automated and see where it gets you. Most people fall somewhere in between.
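
As a minimal sketch of the automated end of that spectrum, Python’s standard library can already pull basic technical metadata from files on disk; richer extraction (EXIF data, keyword analysis) would need additional libraries:

```python
from datetime import datetime, timezone
from pathlib import Path

def extract_file_metadata(path: Path) -> dict:
    """Read basic technical metadata straight from the filesystem."""
    stat = path.stat()
    return {
        "name": path.name,
        "format": path.suffix.lstrip(".").lower() or "unknown",
        "size_bytes": stat.st_size,
        "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }

for file in Path(".").glob("*.md"):
    print(extract_file_metadata(file))
```

Notice what this can’t tell you: what the file is about, who it’s for, or why it matters. That’s the context a human, or a hybrid workflow, still has to add.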

Tips for Getting Started with Metadata

Starting a metadata program can feel overwhelming. Here’s how to begin without drowning:

Start small and specific. Don’t try to take care of everything at once. Pick one collection, one system, or one workflow. Figure out what works there before expanding.

Focus on what people actually need. Metadata is only valuable if someone uses it. Talk to the people who will search, sort, and filter your content. What questions do they ask? What problems do they face? Design your metadata around real needs, not theoretical completeness.

Steal shamelessly from standards. You don’t need to invent metadata from scratch. Standards like Dublin Core, EXIF, and Schema.org solve common problems. Use them as starting points, then customize for your specific needs.
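
For instance, a handful of Dublin Core’s fifteen elements already cover most descriptive needs. Here is a sketch of one record using real Dublin Core element names, with values invented for illustration:

```python
# Dublin Core element names are standard; the values here are invented.
article = {
    "dc:title": "The Sensemaker's Guide to Metadata",
    "dc:creator": "Abby Covert",
    "dc:date": "2025-07-18",
    "dc:subject": ["metadata", "information architecture"],
    "dc:format": "text/html",
    "dc:language": "en",
}
```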

Make it as easy as possible. The easier metadata creation is, the more likely people will do it consistently. Use dropdown menus instead of free text. Provide templates and examples. Automate whatever you can.

Plan for change. Your metadata needs will evolve. Build flexibility into your system from the start. Use extensible schemas. Document your decisions. Make it easy to add new fields or change existing ones.

Measure and iterate. Track how people actually use your metadata. What searches succeed or fail? Which tags get used and which get ignored? Use this data to improve your approach over time.

Metadata Hot Takes

After years of working with metadata across different industries, I’ve developed some strong opinions:

Perfect metadata is the enemy of good metadata. Don’t let the pursuit of completeness prevent you from starting. Partial metadata is infinitely more valuable than no metadata.

Folksonomies beat taxonomies for many use cases. Sometimes it makes the most sense to let people tag things with their own words rather than forcing them into rigid categories. You can always clean up and standardize later. I find folksonomies play a double role: metadata extraction and user research.

Metadata degrades over time. Information changes, but metadata often doesn’t. Build maintenance and review processes into your workflow, or accept that your metadata will become less accurate over time.

Context matters more than completeness. Better to have three relevant, accurate metadata fields than thirty fields that nobody understands or maintains.

The best metadata schema is the one people actually use. Academic perfection means nothing if your real users ignore it. Design for human behavior, not theoretical ideals.

Metadata Frequently Asked Questions

How much metadata is enough?
Enough to solve the problems you’re trying to solve, but not so much that creation and maintenance become burdensome. Start minimal and add fields as you discover specific needs.

Should we use controlled vocabularies or free text?
Both have advantages. Controlled vocabularies ensure consistency but limit expressiveness. Free text captures nuance but creates inconsistency. Consider hybrid approaches where you provide suggested terms but allow custom entries.

How do we get people to actually create metadata?
Make it easy, make it valuable, and make it part of the workflow. If metadata creation feels like extra work, people won’t do it consistently. Integrate it into existing processes and show clear benefits.

What about privacy and security?
Metadata can reveal sensitive information even when the underlying data is protected. Location metadata in photos, access patterns in logs, relationship information in tags. Consider what your metadata exposes and protect it accordingly.

How do we handle metadata when systems change?
Plan for migration from the beginning. Use standard formats where possible. Document your metadata schema clearly. Export metadata regularly as backup. Consider metadata portability when choosing systems.

Should we outsource metadata creation?
Outsourcing works well for routine, high-volume metadata like basic cataloging or keyword tagging. Keep specialized, contextual metadata creation in-house where domain expertise matters.

If you want to learn more about my approach to metadata, consider attending my workshop on July 25 from 12 PM to 2 PM ET. Tag It Right: Building Better Data Descriptions — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in August – Thoughtful Taxonomies!


June 13, 2025

The Sensemaker’s Guide to Controlled Vocabularies

Have you ever been in a meeting where half the room calls something a “segment” while the other half calls it a “clip”? Or maybe you’ve built a navigation menu where the same content could logically fit under multiple labels, depending on who is doing the sorting? If you’re nodding your head right now, you’ve felt the pain that comes from messy language. The good news? There’s a tool for that.

Today we’re diving into controlled vocabularies – one of the most powerful yet often overlooked tools in our sensemaking toolkit. Let’s cut through the confusion together.

This article covers:

What is a Controlled Vocabulary in Sensemaking?
Reasons to Use Controlled Vocabularies
Common Use Cases for Controlled Vocabularies
Types of Controlled Vocabularies
Approaches to Creating Controlled Vocabularies

What is a Controlled Vocabulary in Sensemaking?

A controlled vocabulary is simply an agreed-upon set of terms that a group uses consistently. It’s a list of words that everyone promises to use the same way, to mean the same things. No fancy jargon needed.

Think of it as a language contract that says, “When we say X, we all agree it means Y – not Z.”

At its core, a controlled vocabulary helps groups of people speak the same language, even when they come from different backgrounds or departments. It’s the difference between talking past each other and truly connecting.

Reasons to Use Controlled Vocabularies

Why bother with all this word-wrangling? Here’s the straight talk:

Clearer communication. When everyone uses the same words to mean the same things, we waste less time explaining ourselves.
Better findability. Content, data, and information become much easier to find when everything is labeled consistently.
Reduced confusion. Those painful “wait, what do you mean by that?” moments happen less often.
Simpler onboarding. New team members can get up to speed faster when there’s a shared language in place.
More accurate data. Reports, analytics, and insights improve when we’re all tracking the same things under the same names.
Less rework. How many hours have you lost to misunderstandings that could have been prevented with clearer terms?

Common Use Cases for Controlled Vocabularies

You might be surprised at how often controlled vocabularies show up in your work:

Navigation systems: Ensuring menu items and labels make consistent sense across an entire website
Content tagging: Helping writers and editors apply consistent categories and tags
Data entry: Making sure everyone fills out forms with comparable information
Search systems: Improving the accuracy of search results by connecting related terms
Product catalogs: Organizing products in ways that customers can actually find them
Knowledge bases: Making information retrievable across teams and departments
Project management: Ensuring everyone understands workflow status labels the same way
Industry standards: Creating shared understanding across organizations (think medical terminology or legal documents)

Types of Controlled Vocabularies

Not all controlled vocabularies work the same way. Here are the main types you’ll encounter:

Simple Lists are exactly what they sound like – straightforward collections of preferred terms. These work well for smaller sets of words that don’t have complex relationships (like acceptable status values: “draft,” “in review,” “approved”).
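
Enforcing a simple list in software is about as easy as it gets. A minimal Python sketch using the status values above (the function name is my own):

```python
ALLOWED_STATUSES = {"draft", "in review", "approved"}  # the simple list itself

def validate_status(value: str) -> str:
    """Reject anything outside the agreed-upon list."""
    if value not in ALLOWED_STATUSES:
        raise ValueError(f"Unknown status {value!r}; expected one of {sorted(ALLOWED_STATUSES)}")
    return value

print(validate_status("approved"))  # approved
```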

Synonym Rings connect different words that mean basically the same thing. They help people find what they need regardless of the specific term they use (like connecting “car,” “automobile,” and “vehicle”).
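
A synonym ring maps naturally onto a lookup table. Here is a small sketch; note that treating the first term in each ring as the preferred one is an assumption of this sketch, not a property of synonym rings themselves:

```python
# One ring per concept; the first entry is treated as the preferred term.
SYNONYM_RINGS = [
    ["vehicle", "car", "automobile"],
    ["clip", "segment"],
]

# Build a lookup so any variant resolves to the same preferred term.
PREFERRED = {term: ring[0] for ring in SYNONYM_RINGS for term in ring}

print(PREFERRED["automobile"])  # vehicle
print(PREFERRED["segment"])     # clip
```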

Taxonomies organize terms into parent-child relationships, creating hierarchies that show how concepts relate. They’re perfect for organizing content into broader and narrower topics (like Animals → Mammals → Cats → Domestic Cats).
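
Because taxonomies are just parent-child relationships, they are easy to represent in code. A minimal sketch using the animals example above:

```python
# Each term points to its parent; None marks the root.
TAXONOMY = {
    "Animals": None,
    "Mammals": "Animals",
    "Cats": "Mammals",
    "Domestic Cats": "Cats",
}

def broader_terms(term: str) -> list:
    """Walk up the hierarchy from a term toward the root."""
    path = []
    parent = TAXONOMY.get(term)
    while parent is not None:
        path.append(parent)
        parent = TAXONOMY.get(parent)
    return path

print(broader_terms("Domestic Cats"))  # ['Cats', 'Mammals', 'Animals']
```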

Thesauri are like taxonomies but include not just hierarchical relationships but also associative ones – terms that relate to each other but aren’t directly above or below each other. They often include scope notes that explain exactly how to use each term.

Ontologies are the most complex type. They define concepts, their properties, and the relationships between them with extreme precision. They’re often used in artificial intelligence systems to represent knowledge.

Approaches to Creating Controlled Vocabularies

There are a few ways to build a controlled vocabulary, and the right approach depends on your situation:

Top-down approach: Start with high-level categories determined by experts, then work down to more specific terms. This works well when you have clear domain knowledge and defined requirements.

Bottom-up approach: Begin by gathering the actual terms people are already using, then organize them into patterns. This is great when you’re working with existing content or want to align with how people naturally talk.

Hybrid approach: Most successful vocabularies combine both methods – using expert guidance while also respecting how users actually think and talk about things.

Collaborative approach: Involving stakeholders from across your organization often leads to vocabularies that are more likely to be adopted. When people help build it, they’re more likely to use it.

Borrowed approach: Sometimes the smartest move is to adopt an existing industry standard vocabulary rather than creating your own. Why reinvent the wheel?

Tips to Getting Started with Controlled Vocabularies

Ready to bring some order to the chaos? Here’s how to begin:

Start small. Pick one area where terminology confusion causes real problems, rather than tackling everything at once.
Listen first. Before prescribing terms, take time to understand what words people are already using and why.
Focus on pain points. Target the terms that cause the most misunderstandings or wasted time when used inconsistently.
Document everything. A controlled vocabulary only works when it’s written down somewhere accessible – not just in your head.
Include definitions. Don’t just list terms; explain exactly what each one means and how it should be used.
Plan for governance. Decide upfront who can add, change, or remove terms, and how that process will work.
Build in feedback loops. Language evolves, so create ways for people to suggest improvements or request new terms.
Think about format. Will a simple spreadsheet do the job, or do you need specialized tools to manage your vocabulary?
Consider maintenance from day one. Controlled vocabularies need upkeep. Who will own that responsibility long-term?
Communicate the value. Help others understand how consistent language will make their work easier, not just add another rule to follow.

Controlled Vocabulary Hot Takes

Let me share a few things I’ve learned the hard way:

Perfect is the enemy of useful. An imperfect vocabulary that people actually use beats a perfect one that sits in a document nobody opens.
Force rarely works. You usually can’t mandate language by decree. Focus on making your vocabulary so helpful that people want to use it.
The goal is clarity, not control. Despite the name, successful controlled vocabularies are more about creating shared understanding than policing language.
Natural language will always be messy. No controlled vocabulary will eliminate all ambiguity – and that’s okay. We’re aiming for “better,” not “perfect.”
Technology alone won’t save you. The fanciest taxonomy tool in the world won’t help if you haven’t done the human work of reaching agreement first.

Controlled Vocabulary Frequently Asked Questions

How big should my controlled vocabulary be? Only as big as necessary to solve your specific problems. Start small and grow as needed.

What’s the difference between a taxonomy and a controlled vocabulary? A taxonomy is a type of controlled vocabulary – specifically one that organizes terms into hierarchical relationships.

Who should be responsible for maintaining our controlled vocabulary? Ideally, someone who understands both the subject matter and how language works. Often this falls to information architects, content strategists, or librarians.

How do I get people to actually use our agreed-upon terms? Make it easy (build the vocabulary into tools where possible), make it visible (keep it accessible), and make it valuable (show how it solves real problems).

How often should we update our controlled vocabulary? Plan for regular reviews – perhaps quarterly for active areas. But also create channels for immediate feedback when urgent issues arise.

What tools should we use to manage our controlled vocabulary? It depends on size and complexity. A spreadsheet or shared document works for smaller vocabularies, while dedicated taxonomy management software might be needed for larger efforts.

Language is how we make sense of the world together. When we’re careless with our words, we create unnecessary confusion. Controlled vocabularies give us a practical way to build shared understanding – not by restricting creativity, but by creating a foundation of clarity that actually enables more meaningful work.

Whether you’re wrestling with inconsistent product categories, tangled metadata, or team members who seem to be speaking different languages, a thoughtful approach to vocabulary can transform chaos into clarity.

If you want to learn more about my approach to controlled vocabularies, consider attending my workshop on June 20th from 12 PM to 2 PM ET. Word Choice Wars: Building Controlled Vocabularies That Work — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in July – Designing with Metadata.


May 9, 2025

The Sensemaker’s Guide to Diagramming & Modeling

When was the last time you felt stuck? Maybe it was a complex project with too many moving parts. Perhaps it was a difficult conversation where words alone weren’t getting your point across. Or maybe it was simply trying to understand something that felt just beyond your grasp.

Whatever it was, I bet a diagram would have helped.

This article covers:

What is Diagramming? What is Modeling?

Let’s start with a simple definition:

A diagram is a visual representation that helps someone.

That’s it. If it’s visually represented and it helps someone (even if that someone is just you), congratulations—you’ve made a diagram.

Modeling is creating a representation of something to help us understand it better.

Models don’t have to be visual to be represented; words and numbers can carry a model too. But when we visualize a model, we’re diagramming. For example, writing out the definition of a diagram, as above, might be the right representation of the model I am trying to teach, but a diagram might better draw the audience’s attention to the two pieces coming together.

The distinction between the two is subtle but important:

Diagramming is about creating a visual artifact
Modeling is about the thinking process behind it

Reasons to Diagram

When we face volatility, uncertainty, complexity, and ambiguity, diagrams offer us:

Stability in times of volatility
Transparency when facing uncertainty
Understanding of complexity
Clarity through ambiguity
Kindness to ourselves and others

That last one might surprise you, but diagrams truly are an act of kindness. They reduce cognitive load and create shared understanding. They give us a place to put our thinking so our brains can rest.

Common Use Cases for Diagramming

People diagram for countless reasons, but some common scenarios include:

Explaining complex systems to people who need to understand them
Facilitating decision-making by visualizing options and relationships
Planning projects by mapping tasks, timelines, and dependencies
Analyzing problems by breaking them down into manageable pieces
Communicating concepts that are difficult to express in words alone
Organizing information to reveal patterns and insights
Creating a shared understanding among team members or stakeholders

Types of Models

Below are some of the types of models you are likely to run into in the wild world of sensemaking.

Structural model

A structural model defines the static relationships between entities or components within a system. It answers “What exists and how is it connected?”

These models typically include classes, objects, components, or data structures and their relationships.

Example: Airbnb’s database schema

Airbnb uses a structural model to define how entities like Users, Listings, and Bookings relate.

Process model

A process model illustrates the sequence of activities, decisions, and flows in a system or workflow. It answers “What happens, in what order, and under what conditions?”

Common formats include flowcharts, BPMN diagrams, and swimlane diagrams.

Example: Amazon’s order fulfillment workflow

Amazon uses a Business Process Model and Notation (BPMN) process model to map how an order moves from purchase to delivery.

Behavioral model

A behavioral model represents how a system reacts to internal or external inputs over time. It answers “How does the system behave or respond?”

Often includes state machines, interaction diagrams, or logic models.

Example: Thermostat control logic in Nest

The Nest Thermostat uses a behavioral model to decide when to heat or cool based on sensor input.

Conceptual model

A conceptual model represents how users or stakeholders understand a system or domain. It answers “How do people think this works?”

These are often simplified, abstract representations used to align mental models.

Example: Apple’s Human Interface Guidelines

Apple’s design team maintains a conceptual model of how users understand navigation across iOS apps.

Mathematical model

A mathematical model uses mathematical expressions, algorithms, or statistical methods to simulate real-world phenomena. It answers “What can we calculate, predict, or optimize?”

Includes equations, formulas, and algorithmic logic.

Example: Netflix’s recommendation algorithm

Netflix uses a mathematical model called collaborative filtering.

Types of Diagrams

There are countless types of diagrams out there. A common way to differentiate them is qualitative vs. quantitative, but I like to think about types differently: by what a diagram centers on – the specific point it concentrates attention on. In my book, Stuck? Diagrams Help., I propose three main centers for diagrams to have.

Time

When asking questions of when or how, time-based diagrams are helpful.

Examples: visual instructions, Gantt charts, timelines, flow charts, and decision diagrams

Arrangement

When asking questions of what or where, arrangement-based diagrams are helpful.

Examples: sitemaps, swim lane diagrams, block diagrams, quadrant diagrams, and blueprints

Context

When asking questions of which or why, context-based diagrams are helpful.

Examples: concept maps, journey maps, mental model diagrams, Venn diagrams, and scatter plots

As you can see, each center serves different needs and helps answer different types of questions.

By combining the kind of question you’re centering (Time, Context, Arrangement) with the type of model you’re working with (Structural, Process, Behavioral, Conceptual, or Mathematical), you can find the right kind of diagram to help you.

Here’s a quick cheat sheet:

Diagram x Model | Structural | Process | Behavioral | Conceptual | Mathematical
Time | Visual Instructions | Gantt Chart | Timeline | Flow Chart | Decision Diagram
Context | Concept Map | Journey Map | Mental Model Diagram | Venn Diagram | Scatter Plot
Arrangement | Sitemap | Swim Lane Diagram | Block Diagram | Quadrant Diagram | Blueprints
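
If you like your cheat sheets machine-readable, the same table fits in a simple lookup. A small Python sketch of the table above, keyed by (center, model type):

```python
# The cheat sheet as a lookup: (center, model type) -> diagram type
CHEAT_SHEET = {
    ("Time", "Structural"): "Visual Instructions",
    ("Time", "Process"): "Gantt Chart",
    ("Time", "Behavioral"): "Timeline",
    ("Time", "Conceptual"): "Flow Chart",
    ("Time", "Mathematical"): "Decision Diagram",
    ("Context", "Structural"): "Concept Map",
    ("Context", "Process"): "Journey Map",
    ("Context", "Behavioral"): "Mental Model Diagram",
    ("Context", "Conceptual"): "Venn Diagram",
    ("Context", "Mathematical"): "Scatter Plot",
    ("Arrangement", "Structural"): "Sitemap",
    ("Arrangement", "Process"): "Swim Lane Diagram",
    ("Arrangement", "Behavioral"): "Block Diagram",
    ("Arrangement", "Conceptual"): "Quadrant Diagram",
    ("Arrangement", "Mathematical"): "Blueprints",
}

print(CHEAT_SHEET[("Arrangement", "Structural")])  # Sitemap
```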

Approaching Modeling & Diagramming

Effective modeling involves a thoughtful process:

Set an intention: What do you want this model to accomplish?
Research your audience: Who needs to understand this? What do they know already?
Choose your scope: What’s included and what’s excluded?
Determine scale: How big is the space, and how detailed does this need to be?
Iterate and refine: Diagrams are never perfect on the first try

Remember that diagramming is exploratory. You’re trying to make sense of something, so be prepared for your understanding to evolve as you work.

Tips to Getting Started

Start simple: Use basic shapes and lines. Rectangles, circles, and straight lines will get you far.
Use a grid: Aligning elements creates visual order that reduces cognitive load.
Be consistent: Use the same visual language throughout (don’t mix metaphors).
Label everything: Clear labels prevent confusion and questions.
Test with others: The only way to know if your diagram helps someone is to test it.
Embrace iteration: Your first attempt won’t be perfect, and that’s okay.
Don’t overdo it: When it comes to colors, shapes, and decorative elements, less is often more.

Hot Takes

Here are some opinions I’ve formed over years of diagramming:

There is no such thing as a bad diagram if it helps someone. What matters is effectiveness, not aesthetics.
Diagrams made for yourself can look messier than those made for others. Different audiences have different needs.
Fancy tools aren’t necessary. Some of the most effective diagrams I’ve made were drawn on napkins or whiteboards.
Color-coding should always have a backup. Not everyone perceives color the same way, so don’t rely solely on color to convey meaning.
Labels should be as concise as possible. Aim for under 25 characters for most labels.
Line crossing is usually avoidable. If your lines need to cross frequently, reconsider your layout.
Diagrams can reveal insights that text alone cannot. The act of visualizing information often leads to new understanding.

Frequently Asked Questions

Do I need to be good at drawing to make diagrams?

Absolutely not! Diagrams are about clarity, not artistic skill. Simple shapes and lines are all you need.

What’s the best software for diagramming?

The one you’re comfortable with. For beginners, I recommend whatever tools you already know—PowerPoint, Google Slides, or even pen and paper work great. As you advance, you might explore dedicated tools.

How do I know if my diagram is good?

If it helps someone understand something, it’s a good diagram. Test it with your intended audience and iterate based on their feedback.

What if my diagram gets too complex?

That’s a sign you might need multiple diagrams or a different approach. Consider breaking it down into smaller, more focused diagrams.

How much information should I include?

Include only what’s necessary to meet your intention. When in doubt, start with less—you can always add more if needed.

Should I make my diagram pretty?

Focus on clarity first. Visual appeal can enhance a diagram but should never come at the expense of understanding.

If you want to learn more about my approach to diagramming and modeling, consider attending my workshop on May 16 from 12 PM to 2 PM ET. Blueprint Basics: Making Models That Actually Help — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in June – Controlled Vocabularies!


April 10, 2025

The Sensemaker’s Guide to Stakeholders

If I had a nickel for every time someone told me their information architecture project failed because “the stakeholders just didn’t get it,” I’d have enough money to buy everyone reading this a coffee. Here’s the truth that nobody tells you when you’re learning about IA: the best structure in the world won’t survive contact with misaligned stakeholders.

Think about it. How many brilliant navigation systems have been destroyed by last-minute executive opinions? How many carefully crafted taxonomies have been mangled by committee thinking? How many elegant content models have crashed against the rocks of organizational politics?

The gap between a perfect IA solution and successful implementation isn’t technical – it’s human. And that’s why stakeholder management isn’t a nice-to-have skill; it’s the difference between your work living or dying.

So let’s talk about the part of IA that isn’t about boxes and arrows, but about the people who need to understand, support, and defend those boxes and arrows when it matters most.

This article covers:

Who is a Stakeholder in Sensemaking?

A stakeholder is someone who has a viable and legitimate interest in the work you’re doing. Sometimes we choose our stakeholders; other times, we don’t have that luxury. Either way, understanding our stakeholders is crucial to our success. When we work against each other, progress comes to a halt.

Stakeholder management in IA isn’t about manipulating people to get your way. It’s about creating shared understanding and momentum toward making sense of a mess together.

Reasons to Invest in Stakeholder Management

Let’s be honest – we love the satisfaction of a perfectly structured taxonomy, the elegant simplicity of intuitive navigation, and the quiet power of a well-crafted site map. Given the choice between sketching out another user flow diagram or sitting through a stakeholder meeting filled with competing opinions and office politics, many of us would choose the diagram every time.

But here’s the uncomfortable truth: the most beautiful, logically sound information architecture in the world is completely worthless if it never gets implemented. And implementation doesn’t happen through diagrams – it happens through people.

The stakeholders who control budgets, influence decisions, create content, build systems, and ultimately determine whether your carefully crafted solutions ever see the light of day are the difference between theory and practice.

The most successful information architects understand that their job isn’t just to organize information – it’s to organize people around information. This means developing relationships with stakeholders isn’t just a necessary evil or administrative overhead – it’s the foundation that makes everything else possible.

Because:

Language clarity depends on it – Without shared language with stakeholders, you’ll work twice as hard to communicate half as clearly.
Structures need defenders – The best IA in the world can be bulldozed overnight by a stakeholder who doesn’t understand or believe in it.
Messes are subjective – What looks like a well-oiled system to one person can be an absolute disaster to another. Understanding these perspectives is essential.
Reality has many players – Stakeholders bring context you don’t have access to otherwise.
Tension kills momentum – When stakeholders feel misunderstood or sidelined, resentment builds, and progress stalls. And while momentum stalls, messes grow larger and meaner.

Common Stakeholder Management Scenarios

Here are five common scenarios where stakeholder management makes or breaks your IA work:

Disputes over what to call things – When legal, marketing, product, and engineering all have different names for the same concept.
Lack of clarity on what things “are” – When everyone seems to be talking about the same thing but using different terms.
Overlapping functionality – When teams unknowingly duplicate efforts or contradict each other’s work.
Unclear priorities – When there’s no shared understanding of which audiences or goals matter most.
Technical debt discussions – When addressing the pile of issues in your technical setup left behind by prior decisions.

Types of Stakeholders

Stakeholders come in many roles and power hierarchies, so in my Stakeholder Interviewing Guide, I outline the following list of types of stakeholders:

People who could change the course of your project midstream
People signing off on your work
People who will help you make the thing you are making
People who sign off on the work of your partners
People who will market the thing you are making
People who will measure the success of the thing you are making
People who will maintain the thing you are making
People who will need to translate/interpret/adapt the thing you are making
People who will provide customer service on the thing you are making

An important point to make here is: the org chart lies. The most important people for your project aren’t always the ones with fancy titles. Look for the “translators” who bridge departments, the folks everyone runs to with questions, and those quiet maintainers who actually keep things running. Power doesn’t always match influence. Find your real stakeholders by following the trust paths. And when in doubt, ask everyone you deem a stakeholder: Who might I talk to next? Their network and knowledge of who matters, matters.

8 Approaches to Stakeholder Management

When working with stakeholders in IA projects, there’s definitely no one-size-fits-all approach. Different situations and organizational cultures may require different styles of engagement. Here are eight stakeholder management styles worth considering:

The Coalition Builder

This approach focuses on identifying allies and building supportive networks before tackling resistance. Coalition builders spend significant time forming relationships with influential stakeholders who share their vision. They create a foundation of support that makes it harder for resistant stakeholders to block progress.

When to use it: In highly political environments where individual power is limited, or when attempting significant organizational change that requires broad support.

Example technique: Create a stakeholder map that identifies not just roles but relationships between stakeholders. Use this to identify potential allies and build a sequence of small wins that gradually expand your coalition.
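
If it helps to make the relationship half of that map queryable, here is a toy Python sketch; the names and influence lines are entirely hypothetical:

```python
# A toy stakeholder map capturing who influences whom, not just roles.
INFLUENCES = {
    "VP Product": ["PM Lead", "Design Lead"],
    "PM Lead": ["Engineering Team"],
    "Design Lead": ["Content Team"],
}

def downstream(person: str) -> set:
    """Everyone a stakeholder can influence, directly or indirectly."""
    reached, stack = set(), [person]
    while stack:
        for neighbor in INFLUENCES.get(stack.pop(), []):
            if neighbor not in reached:
                reached.add(neighbor)
                stack.append(neighbor)
    return reached

# Useful for sequencing small wins: win over the people whose reach is widest.
print(downstream("VP Product"))  # set order may vary
```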

The Educator

This style treats stakeholder management as primarily an educational opportunity. The focus is on increasing stakeholders’ literacy in information architecture concepts and principles, empowering them to become advocates for good IA within their own domains.

When to use it: When working with stakeholders who will be long-term collaborators, or in organizations where IA awareness is low.

Example technique: Create brief “IA Concept of the Week” materials that introduce one simple principle at a time. Use real examples from the organization to illustrate how the concept applies to their work.

The Strategist

Strategists align IA initiatives with business metrics and strategic goals that stakeholders already care about. They speak the language of business value first, IA principles second.

When to use it: When working with executive-level stakeholders or in organizations where financial or strategic drivers dominate decision-making.

Example technique: Frame IA improvements in terms of their impact on key business metrics like conversion rates, support costs, or employee productivity. Create before-and-after scenarios that demonstrate tangible ROI.

The Embedded Partner

This approach involves becoming temporarily “embedded” within stakeholder teams to experience their challenges firsthand. By working directly alongside stakeholders, the IA practitioner gains credibility and deeper contextual understanding.

When to use it: For complex projects spanning multiple departments or when stakeholder trust is initially low.

Example technique: Schedule “embed days” where you sit with different departments, observing their workflows and pain points before proposing solutions.

The Workshop Facilitator

Rather than conducting individual interviews, this style brings stakeholders together in structured workshop settings where they can collectively surface and resolve tensions through guided activities.

When to use it: When conflicts are primarily due to miscommunication rather than fundamental disagreements, or when time constraints make individual interviews impractical.

Example technique: Card sorting exercises with stakeholders to surface different mental models, followed by collaborative mapping to find common ground.

The Documentarian

This approach focuses on creating clear, accessible documentation that makes IA decisions and their rationales transparent to all stakeholders. The emphasis is on building an explicit shared understanding through comprehensive documentation.

When to use it: In regulated industries where decisions must be traceable, in organizations with high turnover, or with distributed teams.

Example technique: Maintain a decision log that captures not just what was decided, but why, including which stakeholders were consulted and what alternatives were considered.

The Agile Incrementalist

This style breaks down stakeholder engagement into small, iterative cycles, demonstrating value quickly and adjusting based on feedback.

When to use it: When stakeholder trust needs to be built gradually, or when the full scope of IA work is difficult to define upfront.

Example technique: Create “minimum viable taxonomies” that address one specific pain point, demonstrate their value, then expand scope based on success.

The Challenger

Some situations call for a more provocative approach that challenges stakeholders’ assumptions directly. Challengers use thought experiments, worst-case scenarios, and devil’s advocate positions to push stakeholders beyond their comfort zones.

When to use it: When stakeholders are stuck in “this is how we’ve always done it” thinking, or when potential risks are being overlooked.

Example technique: Create “anti-personas” or “dark pattern” examples that illustrate the consequences of poor IA decisions, making abstract risks more concrete.

The most effective stakeholder management often involves flexibly switching between these styles based on the specific stakeholders, organizational context, and project phase. The key is recognizing which approach is most likely to create momentum and alignment in your particular situation, rather than sticking rigidly to a single style.

Choosing the Right Stakeholder Management Style: A Decision Guide

Selecting the most effective stakeholder management approach can make or break your information architecture project. Different organizational contexts and stakeholder dynamics call for different styles. Here’s a practical guide to help you determine which approach might work best in your specific situation.

See Full Size Decision Tree Here

Before choosing a stakeholder management style, ask yourself:

What’s the political landscape?

Competing interests among departments often create resistance to new information architecture initiatives, requiring careful navigation of organizational politics. Understanding who holds decision-making power and identifying potential allies will help you build the necessary support to overcome barriers.

What’s the IA literacy level?

Stakeholders with limited understanding of information architecture concepts may struggle to see the value in proposed solutions, leading to unnecessary pushback or unrealistic expectations. Assessing their knowledge gaps allows you to develop targeted educational approaches that build their confidence and ability to contribute meaningfully to IA discussions.

What’s your timeline?

Tight deadlines may force you to prioritize quick alignment over deep stakeholder engagement, shifting your focus to minimum viable solutions that can be implemented rapidly. Longer timelines offer opportunities for more collaborative approaches where stakeholders can be fully immersed in the process, resulting in stronger buy-in and more refined solutions.

What’s the organizational culture?

Documentation-heavy organizations often require formal presentations, detailed specifications, and sign-off processes at each project stage, extending timelines but creating clear accountability. Agile environments typically favor rapid prototyping, frequent iteration, and less formal stakeholder engagement methods, which can accelerate implementation but may create challenges in maintaining consistent stakeholder alignment.

What’s the history?

Stakeholders who have experienced failed IA initiatives in the past may approach new projects with skepticism and heightened scrutiny, requiring you to rebuild trust through early wins and transparent communication. Understanding this history helps you address specific concerns proactively and demonstrate how your approach differs from previous unsuccessful attempts.

Mixing Styles

Remember that these styles aren’t mutually exclusive. The most effective stakeholder managers adapt their approach as the project evolves:

For example, you might …

Start with The Educator to build basic literacy
Switch to The Coalition Builder to gather support
Use The Workshop Facilitator at key decision points
Employ The Documentarian to solidify decisions

You may also employ different styles with different groups …

An Agile Incrementalist with the product team
A Strategist with the leadership team
A Coalition Builder with the design team
An Embedded Partner with the data team

Contextual Factors to Consider

Your choice may also be influenced by:

Organizational hierarchy – Flat organizations may respond better to collaborative styles like Workshop Facilitator
Industry norms – Tech companies might embrace Agile Incrementalist while financial services might prefer Documentarian
Project visibility – High-profile projects may require more Strategist elements
Team distribution – Remote or global teams often need more structured documentation
Your own strengths – Choose styles that leverage your natural abilities while stretching into new approaches

The flow chart above provides a structured way to think through these decisions, but always be ready to adjust your approach based on feedback and changing circumstances.

Tips for Getting Started with Stakeholder Management

Design with, not for – Your stakeholders should be able to influence and react to your tools and methods. Get them involved early.
Mind your language – Create a list of terms to explore with stakeholders. Define each simply. Document the history, alternatives, and myths associated with each.
Don’t hide – Working on IA independently at your desk isn’t practicing information architecture. It’s hiding.
Prepare for tension – Fear, anxiety, and linguistic insecurity get in the way of progress. To get through tension, understand stakeholders’ positions and perceptions.
Document arguments – Create structural arguments that consider intent, information, content, facets, classification, curation, and trade-offs.
Create measurement plans – Define specific goals with stakeholders that have intent, baseline, and progress elements. Use indicators to track movement.

Stakeholder Management Hot Takes

It’s easy to reach agreement alone. Getting everyone to agree is hard but necessary work.
The sum is greater than its parts. We need to understand many pieces together to make sense of what we have.
Your language choices say a lot about you. How you classify and organize things reflects your intent but also your worldview, culture, and privilege.
Be the filter, not the grounds. Sensemaking is like removing the grit from the ideas we’re trying to give to users. What we remove is as important as what we add.
“Hey, nice IA!” said no one, ever. People don’t compliment information architecture unless it’s broken. If you practice IA for the glory, get ready to be disappointed.

Stakeholder Management Frequently Asked Questions

Q: How do I get buy-in for stakeholder interviewing? A: Make a clear case for collaboration. Position it as a knowledge transfer activity that will better allow you to navigate the project together.

Q: How do I talk about IA with stakeholders who don’t know what it is? A: Don’t say, “Let’s do some information architecture.” Instead say, “Wow, we have lots of information floating around about this, huh? I think I can help by bringing some structure to how we think about this.”

Q: What if stakeholders see the world differently than I do? A: That’s normal and expected. Your job is to uncover those differences and find a path forward that serves both stakeholders and users.

Q: How do I handle disagreements about classification? A: Create at least two structural arguments to compare options. Consider intent, information, content, facets, classification, curation, and trade-offs for each.

Q: How do I balance stakeholder needs with user needs? A: Your job is to uncover the differences between what stakeholders think users need and what users think they need for themselves. Bring data from both sides to the table.

The irony of information architecture is that its success depends as much on how you navigate the human landscape as how you structure information. In the end, even the most perfectly architected system can be derailed by a single stakeholder who wasn’t properly engaged, while a technically imperfect solution with strong stakeholder alignment often thrives.

I’ve spent years watching incredible IA work fail not because the labels were wrong or the navigation paths were flawed, but because the people side of the equation was treated as an afterthought. By contrast, I’ve seen modest structural improvements create outsized impact because stakeholders understood, believed in, and advocated for them.

Remember: information doesn’t exist without people to interpret it. The stakeholders who shape, maintain, and influence your work aren’t obstacles to work around – they’re the essential context that makes your work meaningful and sustainable.

So before you open another Figma file or create another sitemap, take a moment to map the human terrain. Your future self will thank you.

If you want to learn more about my approach to stakeholder management, consider attending my workshop on April 25 from 12 PM to 2 PM ET. Mapping the People Puzzle: Tools for Stakeholder Management — this workshop is free to premium members of the Sensemakers Club along with a new workshop each month.

Thanks for reading, and stay tuned for our focus area in May – Diagramming & Modeling!


March 14, 2025

The Sensemaker’s Guide to Heuristics

Picture this: You’re standing in front of a mess that needs making sense of. Maybe it’s a product that’s grown too complex, a process that’s gotten unwieldy, or a team that’s lost its way. You know something needs to change, but where do you even start?

Enter heuristics – your trusty flashlight in the dark. Not quite rules (too rigid), not quite guidelines (too vague), but something wonderfully in-between. They’re the “if you remember nothing else, remember this” principles that help you tackle big problems without losing your mind.

I created my own set of information architecture heuristics back in 2011, sitting in a kitchen that smelled like blueberry pie. I was trying to figure out how to evaluate digital experiences in a way that actually made sense to everyone involved – not just the specialists. What started as a way to make seemingly subjective decisions more systematic turned into something bigger: a set of practical principles, in the form of my popular IA Heuristics Poster and Workbook, that have helped countless teams make better choices, faster.

Here’s the thing about heuristics – they’re not about being perfect. They’re about being good enough to move forward. Think of them as well-worn paths through a dense forest. Sure, there might be other ways through, but these paths? They’ll help you avoid the worst of the thorns.

Here’s another thing I have learned about heuristics – some people are allergic to the word, yet all of us agree our expertise is built on our heuristic knowledge of the contexts and mediums in which we work. I think it is incredibly ironic that a word that means “rule of thumb” would be seen as “too complex or academic” just by its sound. But we humans are fickle. As a group, we don’t like ten-dollar words like that – almost as a rule.

So let’s dig into this ten-dollar word and see if we can find some priceless wisdom.

This article covers:

What are Heuristics?
Reasons to use Heuristics
Common Use Cases for Heuristics
Types of Heuristics
Approaches to Heuristics
Tips to Getting Started with Heuristics
Hot Takes about Heuristics
Frequently Asked Questions about Heuristics

What are Heuristics?

Heuristics are guiding principles that help us navigate complexity without getting lost in the details. Heuristics are much more than just evaluation criteria – they’re thinking tools that help us make better decisions faster.

Before we get into the power of heuristics, let’s talk about where standards fit in. Standards like ISO 9241 for interaction design or WCAG 2 for accessibility provide specific requirements and measurements. They tell us exactly what must be done to be compliant. Heuristics, on the other hand, are more flexible principles that help us think through problems and make better decisions. Think of standards as the laws of the road, while heuristics are more like the principles of good driving.

In sensemaking work, we use both: standards to ensure we meet necessary requirements, and heuristics to help us make good choices in the spaces between those requirements. With that distinction in mind, let’s look at the reasons why you might turn to heuristics.

Reasons to Use Heuristics

Why do we need these principles? Because making sense of complex situations is hard, and we need reliable ways to:

Cut Through Complexity

Complex problems can paralyze teams with too many options and unclear priorities. Heuristics give us a systematic way to break down big challenges without getting lost in the details. They help us focus on what matters most while keeping the bigger picture in view.

Make Consistent Decisions

Good decisions shouldn’t depend on who happened to be in the room that day. Heuristics provide teams with shared criteria for judgment, leading to more consistent choices across projects and over time. I’ve watched teams move from circular debates to productive discussions just by having clear principles to guide them.

Build Shared Understanding

When everyone on a team knows what “clear” or “useful” means in their context, collaboration gets easier. Heuristics create a common language for quality that transcends individual preferences and helps teams align on what “good” looks like.

Create Frameworks for Critique

Critique sessions without structure can quickly devolve into opinion battles. Heuristics provide a framework for giving and receiving feedback that focuses on substance rather than style. They help turn “I don’t like it” into specific, actionable feedback.

Avoid Common Pitfalls

Why stumble into the same holes others have already discovered? Heuristics capture hard-won wisdom in an actionable format. They’re like having a cheat sheet of lessons learned by those who’ve already walked the path you’re on.

The beauty of heuristics is that they give us just enough structure to move forward without becoming rigid rules that stifle creativity or innovation.

Common Use Cases for Heuristics

In my years of teaching and practicing, I’ve seen heuristics used effectively in:

Evaluating Existing Systems

When facing a mess, teams often rush to fix things without truly understanding what’s broken. Heuristic evaluation helps identify actual problems versus surface-level symptoms. I’ve seen teams save months of work by taking this structured approach to assessment.

Planning New Initiatives

Starting a new project without principles is like setting sail without a compass. Heuristics help teams chart a course that avoids known pitfalls while keeping the end goal in sight. They turn overwhelming projects into manageable steps with clear success criteria.

Training Team Members

Every team develops its own way of working, but quality should be consistent. Using heuristics in training helps teams develop shared standards without squashing individual creativity. It’s about teaching principles that guide decisions, not rigid rules that limit options.

Facilitating Productive Critiques

Structure transforms critique sessions from subjective opinion-sharing into productive discussions. When teams evaluate work against clear principles, they can focus on how well something meets established criteria rather than personal preferences.

Building Shared Vocabulary

Clear communication requires shared understanding. When teams agree on what terms like “accessible” or “consistent” mean in their context, they spend less time debating semantics and more time solving problems. I’ve watched teams cut meeting times in half just by getting this alignment.

Creating Measurement Frameworks

You can’t improve what you can’t measure, and you can’t measure without clear criteria. Heuristics provide specific qualities to evaluate, making it easier to track progress and demonstrate value. They help answer the critical question: “How do we know if this is working?”

Types of Heuristics

From my experience working with heuristics across different contexts and organizations, I’ve found it useful to think about them clustering into three main categories. This isn’t an established framework, but rather a practical way of organizing principles that often emerge in sensemaking work.

Structural Heuristics

Structural heuristics guide how we organize and connect information, systems, and processes. They help us evaluate whether the bones of what we’re building make sense and will support the weight of what we’re trying to achieve. When called in to help make sense of a messy taxonomy, we might look for patterns in how things are grouped and arranged. Are similar things being called different names across departments? Are important relationships being obscured by how things are organized? Structural heuristics help us spot these issues and suggest better ways to organize information so relationships become clear and navigation becomes intuitive.

Structural Principles tend to focus on:

Organization and grouping
Navigation and findability
Consistency and patterns
Visual/spatial relationships

Experiential Heuristics

Experiential heuristics address how people interact with and feel about what we create. When working on a project to improve internal documentation, we don’t just look at whether information exists – we examine how people find and use it in their daily work. Are they avoiding certain tools because they’re frustrating? Are they creating workarounds because the official process is too cumbersome? Experiential heuristics help us identify where friction points exist and how to reduce them, making sure solutions work in reality, not just on paper.

Experiential Principles tend to focus on:

User feelings and responses
Ease of understanding
Satisfaction and delight
Accessibility and usability

Procedural Heuristics

Procedural heuristics guide how work gets done and decisions get made. When helping teams develop better ways of working, we can look at how decisions flow through an organization. Where do things get stuck? Why do some processes work smoothly while others constantly hit snags? One team I worked with had a perfectly logical approval process that nonetheless created constant bottlenecks. Procedural heuristics helped us identify that the issue wasn’t the process itself, but how it handled exceptions and special cases. This led to a more flexible system that worked better in practice.

Procedural Principles tend to focus on:

Control and recovery
Error handling
Efficiency and automation
System behavior management

We often need to employ heuristics from each of the above three types in order to create a well-rounded list that is suitable to our purpose. It is important to consider whether you have enough coverage in each type when creating your own list of heuristics.

Approaches to Heuristics

The key to using heuristics effectively is finding the right balance between rigor and flexibility. Here’s how I recommend you approach it:

Start with a Review of Established Heuristic Frameworks

I think perhaps the single most important thing I can point out is that while many useful lists of heuristics exist, creating your own unique list is often a better investment than adopting an established one wholesale. That said, since no single framework is likely to serve ALL your needs, borrowing principles from many sources is the right next step for most heuristic endeavors.

Here is a non-exhaustive list of established frameworks touching on sensemaking heuristics:

Herbert Simon’s Bounded Rationality & Satisficing (1955)
Daniel Kahneman & Amos Tversky’s Heuristics and Biases (1974)
Nielsen/Molich’s Usability Heuristics (1990)
Robin Williams’s CRAP Principles (1994)
Gerd Gigerenzer’s Fast-and-Frugal Heuristics / Adaptive Toolbox (1999)
Morville’s UX Honeycomb (2004)
ISO 9241 Ergonomics of Human-System Interaction (2006)
Gerhardt-Powals’ Cognitive Engineering Principles (2009)
Resmini/Rosati’s Pervasive IA Heuristics (2011)
Norman’s Revised Design Principles (2013)
Google’s HEART Framework (2016)
Resource-Rational Heuristics (2021)
Pope’s Heuristics for Content (2024)

… and since I am an information nerd to my core and made a spreadsheet for my own purposes while noodling on this piece, here is a full listing of all of the relevant principles from all of the aforementioned frameworks, neatly organized by type and sortable by year: A historical menu of heuristic gold. If your pulse just quickened, you’re welcome, you absolute nerd.

Customize for Your Needs

Now that you have calmed yourself with the knowledge that all established frameworks were made by smart folks just like you, we can move into the real work of establishing a custom list of heuristics that are specific to your needs.

Here are the top three lessons I have learned in helping teams establish heuristic lists for various purposes.

Build on or integrate principles that already exist in your organizational culture

Many organizations already have principles that they align to from a cultural or even ethical standpoint. Laddering to or repurposing the framing for your set can be effective in some cultures.

At Netflix, their widely-known cultural value of ‘Context over Control’ became a foundation for their design system principles. Instead of creating new rules about component flexibility, they built on this existing principle: ‘Components should provide context for proper use rather than rigid controls that limit innovation.’

Modify language to match your culture

There are words that smell right, and some that smell wrong, in the context of an organizational culture. Pay special attention to that. If, for example, your organization is allergic to experts or academics, be careful even with a word like “heuristic” – I can promise the results are just as sweet by any other name 😉

At a company that was full of artisans, we changed ‘Information Architecture Heuristics’ to ‘Navigation Ground Rules.’ Instead of saying ‘Maintain clear taxonomic relationships,’ we wrote ‘Put things where they make sense to the people who make things.’ Usage of the principles jumped dramatically after this simple language shift.

Add specific examples relevant to your work

The best way to get across a heuristic is to use an example of it being violated. Aim for an example that is specifically relevant to your work.

When working on one particular system, we had the principle ‘Always show system status.’ What made it real: showing the team a payment screen that only displayed ‘Processing…’ with no indication if it was stuck or working. That example clicked instantly as we have all experienced the sense of ‘hanging’ without a status.

Test and Refine

Once you have a list you think might be helpful, it’s time to see if you are right. As is true with any other information architecture, the only way to know if it works is to ask users if it makes sense for their purposes.

Apply heuristics in real situations

The only way to know if your heuristics work is to use them on actual projects. Start small with a single project or problem rather than rolling them out across your entire organization at once. Pick a project where the stakes aren’t too high but the work is complex enough to test your principles thoroughly.

Document how your team uses the heuristics during the project. Note which ones come up often and which ones rarely get mentioned. Pay attention to moments when someone says “this doesn’t seem to fit our situation” – these are gold for refining your list.

Gather feedback from users

Don’t just ask your team what they think of the heuristics – look at how the end results are received by actual users. Set up simple ways to collect feedback that directly relates to your principles. For example, if one of your heuristics is about clarity, ask users specific questions about whether information was easy to understand.

You might even eventually create a simple scoring system where users can rate how well your product meets each principle.

Iterate based on results

After using your heuristics and gathering feedback, be ready to make changes. This might mean:

Removing principles that don’t actually help your specific work
Adding new ones that address problems you keep running into
Rewording principles to make them clearer
Combining overlapping ideas into stronger single principles
Creating better examples that show real violations from your own work

Remember that the point of heuristics is to make work better and easier, not to create more rules to follow. If your list grows beyond 10-12 items, it probably needs trimming. The best heuristics are the ones people actually remember and use without having to look them up.

Set a regular schedule to review and update your principles, maybe every six months or after major projects. This keeps them fresh and relevant to your current challenges rather than becoming outdated rules that everyone ignores.

Tips for Getting Started with Heuristics

I know creating heuristics can feel overwhelming, so here’s how to begin without losing your mind:

Step 1: Begin with a small set of principles (5-7 maximum)

Start with just the most important ideas instead of trying to cover everything. When teams try to create comprehensive lists, they end up with principles nobody remembers. The human brain can only juggle so many things at once, so respect that limit. I’ve seen teams spend weeks crafting 25 perfect principles that nobody ever uses. In contrast, teams with 5 to 10 solid principles actually apply them daily because they can recall them without checking a document. If you can’t explain your complete set of principles in a casual 5-minute conversation, they’re too complicated.

Step 2: Write them in clear, actionable language

Skip the fancy words and academic language. When I worked with one team, they changed “Maintain consistent visual hierarchy” to “Make important stuff stand out.” Usage skyrocketed because everyone immediately understood what it meant. The best principles use words your team already uses in everyday conversations. Make each principle something people can actually do, not just abstract concepts they need to interpret. Avoid vague terms like “optimal” or “quality” that mean different things to different people.

Step 3: Include examples of both good and bad applications

Show real examples from your own work whenever possible. Nothing makes a principle click like seeing it done right and wrong with familiar material. One product team I worked with took screenshots of their own interface showing both good and bad implementations of each principle. They paired each principle with at least one clear violation that made people say, “Oh, I see what you mean now.” Generic examples rarely stick, but team-specific ones become part of your shared language. Make your examples specific enough that people can immediately see how they apply to their daily work.

Step 4: Create simple tools for evaluation

Make a basic checklist people can use during reviews without needing a training session to understand it. A design team I know created a simple 1-5 scale for rating how well their work met each principle, making feedback concrete instead of vague. Build simple yes/no questions that force clear decisions rather than allowing endless debates about interpretation. The best evaluation tools feel less like homework and more like helpful guides.
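
As one illustration of such a tool, here is a small Python sketch of a 1-5 rating tally; the principles are borrowed from examples elsewhere in this article and the scores are hypothetical:

```python
from statistics import mean

# Hypothetical 1-5 ratings collected during a review, keyed by principle
ratings = {
    "Make important stuff stand out": [4, 5, 3],
    "Use the same words for the same things": [2, 3, 2],
    "Always show system status": [5, 4, 4],
}

# Average each principle and surface the weakest areas first.
for principle, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1])):
    print(f"{mean(scores):.1f}  {principle}")
```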

Step 5: Practice applying them regularly

Schedule short review sessions where the team applies principles together to build a shared understanding. Start using them in real projects right away rather than treating them as theoretical concepts. A content team I worked with added a “heuristic check” step to their existing workflow, normalizing principle use as just part of the job. Have team members take turns leading principle reviews to build ownership across the group. Regular practice turns principles from abstract ideas into practical tools everyone knows how to use.

Step 6: Revise based on real use

Plan to update your principles after 3 months of actual use. The first version will never be perfect, and that’s okay. Track which principles get referenced most often and which ones nobody ever mentions. Replace academic language with the actual words you hear your team using when they discuss the principles. Be willing to throw out principles that sound good in a meeting but don’t help in practice. One engineering team I advised completely rewrote their principles after six months because they realized what sounded good in theory wasn’t what they actually needed in practice.

Remember: Good heuristics make work easier, not harder. If your principles aren’t helping your team solve real problems faster, they need to change – no matter how smart they sound.

Heuristic Hot Takes

After years of working with heuristics, here are some things I’ve learned:

Perfect heuristics don’t exist – aim for useful instead

The search for perfect principles is a trap that delays real progress. I’ve seen teams spend months debating exact wording while their actual work problems remained unsolved. Your first version will have flaws, and that’s fine. A “good enough” set of principles that people actually use will outperform a theoretically perfect set that remains stuck in documentation. Start with principles that address your most pressing problems, knowing you’ll improve them as you go.

The best heuristics evolve with use

Principles should change as your work and team change. The product team at a financial company I worked with reviews their principles quarterly, removing ones that no longer fit their challenges and adding new ones based on recurring problems. Their principles shifted dramatically when they moved from desktop to mobile-first design. Static principles grow stale and irrelevant. Living principles that adapt based on real experience maintain their power to guide decisions over time.

Context matters more than comprehensiveness

Different projects need different approaches to the same principles. A design system team created three versions of their evaluation process: one for quick checks, one for formal reviews, and one for teaching new hires. Trying to create a single comprehensive framework that covers every possible scenario produces something too complex to use regularly. Simple, context-specific guidelines that match how your team actually works will see far more use than an exhaustive but unwieldy system.

Simple language beats technical terminology

When a marketing team changed their principle from “Ensure semantic consistency across touch points” to “Use the same words for the same things,” compliance increased almost overnight. Nobody needed an explanation of the second version. The moment you need to define terms within your principles, you’ve already lost most of your audience. Write as if you’re explaining to a smart friend in a completely different field. If your principles sound like they belong in an academic journal, rewrite them until they sound like something you’d say in a casual conversation.

Examples make principles real

A product team struggled with their “Be consistent” principle until they created a wall of screenshots showing five different button styles from their own app labeled “This is what inconsistency looks like.” Suddenly everyone understood. Concrete examples transform vague concepts into clear decisions. The best examples come from your own work, showing real violations that people recognize. When principles include clear examples of both good and bad applications, agreement about what “good” looks like increases dramatically.

Regular use builds muscle memory

A development team made a game of spotting principle violations during their daily standups, turning application into a habit rather than an occasional chore. Within weeks, team members were automatically thinking in terms of their principles without conscious effort. Principles reviewed once a quarter quickly fade from memory, while those applied daily become second nature. Build regular checkpoints into your existing workflow where principles are explicitly referenced, and rotate responsibility for leading these reviews to build ownership across the entire team.

Frequently Asked Questions

Here are three of the top questions I get when teaching heuristics.

Q: How many heuristic principles should we have? A: Start with fewer than 10. You can always add more later, but too many at once becomes overwhelming. Most people can only remember and apply a handful of rules at a time. Beginning with 3-5 core heuristics often works best – this gives your team enough guidance without causing decision paralysis. As your team gains experience, you might find natural opportunities to add more specific guidelines.

Q: Should we create our own heuristic list or use existing ones? A: Start with existing frameworks and modify them for your needs. There’s no need to reinvent the wheel. Many organizations and experts have already tested and refined helpful heuristics. Look at established models in your field, borrow what works, and then adjust them to fit your specific context and challenges. This saves time and gives you a stronger starting point than creating rules from scratch.

Q: How often should we review and update our heuristics? A: Review them whenever they stop being useful. If you find yourself regularly making exceptions, it’s time for an update. Watch for signs like team members frequently bypassing the guidelines, confusion about how to apply them, or decisions that followed the heuristics but led to poor outcomes. A formal review every 3-6 months might be helpful, but the best approach is to maintain ongoing awareness of how well your heuristics are serving their purpose.

Looking Forward

Let’s bring it all together. Heuristics aren’t just fancy rules or abstract concepts – they’re practical thinking tools that help us tackle complex problems. They give us just enough guidance without boxing us in.

The best heuristics are the ones your team actually remembers and uses. Start small with a handful of principles (three to seven) written in plain language everyone understands. Include real examples from your own work that show both good and bad applications. Create simple tools to help people apply these principles regularly, and be ready to change them based on what you learn. Remember that heuristics should make your work easier, not harder. If your team is struggling to use them or forgetting them entirely, that’s a sign they need to be simpler or more relevant to your actual challenges.

Whether you’re organizing information, designing experiences, or improving team processes, good heuristics help you focus on what matters most. They won’t solve every problem for you, but they’ll help you find your way through the complexity without getting lost in the details. The goal isn’t perfection – it’s progress. With the right set of principles guiding your decisions, you’ll make better choices more consistently, communicate more clearly with your team, and create work that truly makes sense.

If you want to learn more about my approach to heuristics, consider attending my workshop, Heuristics in Action: Simple Checks for Better Designs, on March 21st from 12 PM to 2 PM ET. This workshop is free to premium members of the Sensemakers Club, who get a new workshop each month, and drop-in tickets are available. Thanks for reading, and stay tuned: our focus area in April is Stakeholders.



February 13, 2025

The Sensemaker’s Guide to Measurement

Ah, measurement. Just saying the word can make a room of smart people start to squirm. We’ve all been there – trying to prove that our sensemaking work matters while grappling with metrics that don’t quite capture the whole story.

Maybe you’ve watched a team’s eyes glaze over during a metrics review, or felt your stomach drop when someone asked you to “quantify the impact” of your information architecture work.

The truth is, measurement in sensemaking isn’t just about numbers in a spreadsheet or checkboxes on a task list. It’s about noticing when people start asking different questions, spotting the moment teams begin to share understanding, and yes, sometimes even counting things.

Let’s break this down piece by piece, starting with the basics of what we mean by measurement in sensemaking work.

What is Measurement in Sensemaking?

Before we dive into the how-to, let’s talk about what we mean when we say “measurement” in sensemaking work. After all, you can’t measure what you haven’t defined, and you can’t improve what you haven’t measured.

Measurement is the practice of tracking both what we can count and what we can observe to understand if our work is helping people make better sense of things. Most folks think measurement means numbers in spreadsheets. But in sensemaking work, we’re tracking both the things we can count and the things that matter most – like when teams start speaking the same language or people stop getting lost in their work.

Measurement in sensemaking combines clear metrics (like how long it takes to find something) with careful observation of how people work and think. It means watching for tiny shifts that signal big changes – like when team meetings get shorter because everyone’s using the same terminology, or when your inbox gets quieter because people are finding answers on their own.

Think of it as gathering evidence of change, not just collecting data. Sometimes your best measure might be a story about how someone used information differently, or a pattern in the questions people stop asking. The trick is knowing what to track, when to look for impact, and how to spot the signs that your work is making a real difference.

Reasons to Measure

Teams measure their sensemaking work to understand what’s helping and what isn’t. Whether you’re organizing content, building new systems, or helping groups work better together, here’s why tracking changes matters:

To Know Where You’re Starting

You need to understand what’s really happening now before you can make it better. This means capturing how things work today – messy parts and all.

To Spot What’s Actually Changing

Change sneaks up on you. Sometimes it’s obvious, like when your bounce rate drops. But often it’s subtle, like when you notice people starting to use the same words to describe their work, or when the new hire figures something out without asking for help.

To Show Your Work Matters

We all need to prove our work has impact. Good measurement gives you concrete ways to show how sensemaking work makes a difference – not just in numbers, but in real stories about how work gets easier or problems get solved faster.

To Know When to Adjust Course

Measurement tells you if your changes are helping or if you need to try something different. It’s like having a compass – it helps you know if you’re heading in the right direction or if it’s time to redraw the map.

To Build Trust With Teams

When you measure thoughtfully, you show teams you care about what actually helps them work better, not just what looks good in a report. This builds the trust you need to keep making improvements. For instance, when a team tells you they’re struggling with a new process, measuring both their efficiency metrics and their actual experience shows you’re listening – not just checking boxes.

The point isn’t to measure everything – it’s to track enough of the right things to know if you’re making work better for real people. Sometimes that means counting things, but often it means noticing how work changes and collecting stories about what’s different.

Common Measurement Use Cases

Listen, measuring stuff in organizations is messy. We often track things just because we can, not because we should. I’ve spent years watching teams collect numbers that nobody uses and create reports nobody reads. Let’s talk about what really matters when you’re trying to figure out if your work made things better or worse.

System Changes and Migrations

Here’s the thing about system changes – they’re not really about the system. They’re about people trying to get their work done. Think of measurement like a before-and-after photo of how work happens. You need to understand both the messy reality of how people work around the old system and their struggles with the new one. Don’t get caught up in uptime metrics and server stats if people still can’t find the “save” button.

Process Improvements

Processes are just the paths people take to get work done. When you measure process changes, you’re really measuring if you’ve made those paths clearer or more confusing. The trick is to watch how work actually flows, not how it’s supposed to flow on paper. People are really good at finding workarounds – your measurements should help you understand why they need them.

Knowledge Management Initiatives

Knowledge management is a fancy way of saying “helping people find stuff they need to know.” Don’t get lost measuring the size of your knowledge base or how many documents you have. What matters is whether people can find answers when they need them, and if they trust what they find. Watch for the signs that tell you if you’re making the puzzle easier or harder to solve.

Training and Onboarding Programs

New people joining your organization are like visitors trying to navigate a city without a map. Your measurements should tell you if you’re giving them good directions or sending them in circles. Forget about training completion rates – focus on understanding if people feel lost or confident after you’ve tried to help them find their way. Watch for moments that matter: when a new hire completes their first project without asking for help, when their questions shift from ‘where do I find this?’ to ‘how can we improve this?’, or when they start helping others find their way around.

Cross-team Collaboration Efforts

Teams are just groups of people trying to build something together. When you measure collaboration, you’re really measuring if you’ve made it easier for people to understand each other and work together. Look for signs that teams are speaking the same language and building trust, not just meeting deadlines.

Remember, measurement isn’t about proving you’re right – it’s about understanding if you’re helping. Sometimes the most important measurements are the stories people tell about how their work has changed. Don’t get so caught up in collecting data that you forget to listen to what people are actually telling you about their experience.

Types of Measurement

We spend so much time reinventing wheels in our organizations that we can forget to look around and see what others have already figured out. Back in 2019, when I was working with Kristin Skinner and Kamdyn Moore on the DesignOps Summit, we noticed everyone was struggling with the same thing: how do we measure if our work matters?

So we did what sensemakers do – we dug into the mess. We gathered insights from hundreds of people doing design operations work and created a measurement framework that anyone could use. Not because we’re measurement wizards, but because someone needed to help people stop starting from scratch every time.

Let me share the eight types of measurement that kept showing up in our work back in 2019. Unsurprisingly, six years later, I am left without any new wheels to invent:

Output

Output measurements track what we produce and its direct impact. This includes both positive results and areas where we reduce waste or inefficiency.

Examples:

Revenue generated from new product features
Number of successfully completed projects per quarter
Reduction in customer support tickets after documentation updates
Cost savings from process improvements

Cost

Cost measurements look at both direct expenses and hidden costs that accumulate over time.

Examples:

Monthly operating expenses for team tools and software
Training and onboarding costs for new team members
Technical debt from temporary solutions
Resource allocation across projects

Sentiment

Sentiment measurements capture how people feel about and respond to changes, products, or processes.

Examples:

Customer satisfaction scores for new features
Employee feedback on process changes
Social media sentiment analysis
Internal team satisfaction surveys

Adoption

Adoption measurements track how readily people accept and use new tools, processes, or systems.

Examples:

Percentage of team members using new collaboration tools
Time to reach X% user adoption of new features
Spread of new practices across departments
Training completion and implementation rates

Engagement

Engagement measurements focus on ongoing interaction and sustained use over time.

Examples:

Repeat usage rates for new tools
Active participation in team processes
Continued adherence to new workflows
Regular contribution to shared resources

Time

Time measurements focus specifically on identifying and reducing wasted effort.

Examples:

Time saved by automating manual processes
Meeting time reduced through better coordination
Task completion time before and after changes
Time spent searching for information

Attrition

Attrition measurements track where and why we lose people, resources, or momentum.

Examples:

Customer drop-off points in new processes
Team member turnover rates
Abandoned projects or initiatives
Declining usage of tools or systems

Extensibility

Extensibility measurements evaluate how well solutions can adapt and grow over time.

Examples:

Ability to scale processes with team growth
Adaptation of systems to new requirements
Compatibility with other tools and processes
Flexibility in handling unexpected changes

Approaches to Measurement

Now that we have covered the types of measurement you might use, let’s talk about how to actually do this measurement thing. There’s no one right way, but there are some approaches that tend to work better than others.

Quantitative Methods

Sure, you can count things. Sometimes you should! But don’t get stuck thinking numbers are the only way to show impact. When you do count things, make sure you’re counting stuff that matters:

Track how long it takes people to find things before and after changes
Count how many times people ask the same questions
Measure how many steps it takes to complete common tasks
Note how often people need help or get stuck
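As one illustration, counting repeated questions doesn’t require heavy tooling. Here’s a rough sketch in Python; the question log below is entirely fabricated:

```python
# Count how often the same question recurs in a (fabricated) help log.
from collections import Counter

question_log = [
    "Where is the style guide?",
    "How do I request access?",
    "Where is the style guide?",
    "where is the style guide?",
]

# Normalize wording so trivial differences don't hide the pattern.
repeats = Counter(q.strip().lower().rstrip("?") for q in question_log)

# Questions asked three or more times are candidates for better signage.
for question, count in repeats.most_common():
    if count >= 3:
        print(f"Asked {count} times: {question}")
```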

Qualitative Methods

This is where the real gold often lives. It’s about watching, listening, and noticing patterns:

Pay attention to the language people use to describe their work
Notice when questions start to change (or stop coming altogether)
Watch how people move through their work
Listen for stories about what’s different

Hybrid Approaches

The sweet spot is usually somewhere in the middle. Mix your counting with your observing:

Combine usage stats with user stories
Track both completion times and confidence levels
Notice both what people do and how they feel about it
Look for patterns in both numbers and narratives

Tips for Getting Started

Listen, I know measurement can feel overwhelming. Here’s how to begin without losing your mind:

Step 1: Start with Your Intention

Write down what you want to do, but be super specific and time-bound
Make it something you can actually wrap your head around, not some vague wish
Pro tip: Start the statement with an action verb
Example: “I intend to reduce the number of confusing terms in our product documentation by 50% by March 1st”

Step 2: Get Real About Your Why

Fill in the “because” part with something that actually matters to real people
Skip the corporate speak – what’s the human reason?
Think about who benefits and how

Step 3: Ask a Measurable Question

This is your big “how will I know?” question
Make it something you can actually answer with data
Avoid yes/no questions – they’re usually too simple
Example: “How many support tickets include requests for basic term clarification?”

Step 4: Set Up Your Measurement

Pick ONE thing you can count or track (your metric)
Write down where you are now (your baseline)
Write down where you want to be (your goal)

Step 5: Create Your Warning System

Think about what could go wrong or what you need to watch for
Set up two specific flag conditions
For each flag, write down exactly what you’ll do when it happens
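If it helps to keep all five steps in one place, here’s what a finished plan could look like written down as a single record. This is a minimal Python sketch; every name, number, and flag condition in it is a made-up example:

```python
# The five steps captured as one record you can revisit.
# Everything below (names, numbers, flag conditions) is invented.

from dataclasses import dataclass, field

@dataclass
class MeasurementPlan:
    intention: str   # Step 1: specific and time-bound
    because: str     # Step 2: the human reason it matters
    question: str    # Step 3: answerable with data
    metric: str      # Step 4: the ONE thing you track
    baseline: float  # Step 4: where you are now
    goal: float      # Step 4: where you want to be
    flags: dict = field(default_factory=dict)  # Step 5: condition -> response

plan = MeasurementPlan(
    intention="Reduce confusing terms in our product docs by 50% by March 1st",
    because="Support staff re-explain the same vocabulary every week",
    question="How many support tickets ask for basic term clarification?",
    metric="Term-clarification tickets per week",
    baseline=40,
    goal=20,
    flags={
        "tickets rise two weeks in a row": "audit the newest doc pages",
        "no change after one month": "interview support staff about gaps",
    },
)
```

A spreadsheet row or a wiki page works just as well; what matters is that the intention, metric, baseline, goal, and flags live together where you’ll actually look at them.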

Remember: The goal isn’t to create a perfect measurement system – it’s to make something that helps you see what’s actually happening so you can make it better.

Measurement Hot Takes

After years of helping teams figure this out, here are some truthiest truths I’ve learned:

Perfect metrics don’t exist. Stop looking for them. Focus on “good enough to help us make better decisions.”

The best insights often come from the margins. Pay attention to the unexpected patterns and the stories that don’t quite fit. Edge cases may be small in number, but they are so often the canary in a coal mine of other use cases you haven’t run into yet.

Context beats numbers every time. A story about how someone’s work got easier is worth more than a dozen charts showing efficiency improvements.

Small signals matter. Sometimes the biggest changes start with tiny shifts in how people talk about their work.

Answering the Tough Questions

Here’s what people often ask me about measurement:

“How long should we measure baseline performance?” Long enough to see normal patterns, including the ugly parts. Usually at least a few weeks, sometimes months.

“When should we adjust our metrics?” When they stop telling you useful things about your work. Don’t keep measuring stuff just because you always have.

“What if our measurements show unexpected results?” Celebrate! That’s where the learning happens. Unexpected results often point to things we didn’t know we needed to know.

“How do we handle conflicting metrics?” Dig into the conflict – it’s usually telling you something important about how different parts of your organization see success differently. Take a documentation project where page views are up but so are help desk tickets. One metric says success, the other says problem. The conflict tells you people might be finding the docs but not understanding them – that’s useful information you’d miss if you only looked at one number.

Avoiding the Common Pitfalls

Let me save you some pain. Here’s what not to do:

Don’t measure everything. You’ll drown in data and miss the important stuff. Be picky about what you track.

Don’t ignore the lag. Change takes time to show up in measurements. Be patient and keep watching.

Don’t forget the humans. Behind every metric is a person trying to get their work done. Keep that in perspective.

Don’t skip the context. Data without context is just numbers on a page. Capture the story behind the changes.

The Path Forward

Remember, the goal isn’t to become a measurement expert. The goal is to understand if your sensemaking work is actually helping people make better sense of things. Sometimes that means fancy metrics, but more often it means paying attention and asking good questions.

Start small, stay curious, and always remember – you’re measuring to make things better, not to make things perfect.

If you want to learn more about my approach to measurement, consider attending my workshop, “Reality Checks & Ripple Effects: How to Measure Before Change,” on February 21st from 12 PM to 2 PM ET. This workshop is free to premium members of the Sensemakers Club, who get a new workshop each month.

Thanks for reading, and stay tuned: our focus area in March is Conducting a Heuristic Evaluation.


