
Making better decisions with iterative analytics

When we ask questions about our data, we rarely walk in a straight line to the answer. Instead, data analysis often follows a more circuitous, creative process: we ask a question, discover our first effort to answer it is incomplete, reshape our data, bring in new data, consult a domain expert to rework our hypothesis, adjust our question, and so on.
We refer to this process as iterative analysis, though most practitioners will just call it analysis. This way of thinking is intuitive and natural for most people; so natural, in fact, that we often don't even recognize we're doing it, in both our professional and personal lives. But whether we recognize it or not, it's necessary for answering the open-ended questions that sit at the heart of any big decision.

Take, for example, common questions that business leaders ask. Marketers might wonder what predicts high purchase intent. Customer success managers might ask which characteristics or behaviors signal that a customer is likely to succeed or to churn. And product managers may ask if the team should spend time fixing bugs or building new features. To answer these questions, we need iterative analytics.

Should the product team spend time fixing bugs or building new features?

Suppose your customer support and renewal teams want your product team to invest time in fixing bugs. Customers are getting frustrated by bugs, and the issue is becoming a fire. Your sales team, however, says the bugs are edge cases and suggests the product team spend its time building new features, reasoning that this would let them close more new business, faster.

As a product leader, how do you make this decision? You likely can answer some basic questions pretty easily:

  • How often do we record errors?
  • How many customers see them?
  • How many support tickets do we get?
  • How many customers use features that we think are particularly buggy?

These questions help set context, but they don't help you make the decision you actually face. A few charts of error rates won't reveal the tradeoffs between investing in one direction or the other. Instead, we have to analyze this decision from a variety of angles and consider all of that evidence together. To do this, we have to drill two layers deeper.

Our first step is to define the important considerations that are necessary for making our final decision:

  • Are bugs actually affecting the customer’s experience?
  • Do bugs, or related measures like support tickets, correlate with retention?
  • Are some bugs much more impactful to customers than others?
  • How valuable are the new features the sales team is asking for?

To answer each of these questions, we’ve got to go another level deeper.

For example, when we ask “Are support tickets correlated with retention?” we can’t just check whether churned customers send more support tickets than retained customers. We instead have to ask a series of more detailed questions, each of which changes based on the answer to the previous one:

  • Do we need to adjust for customer size? (e.g., are larger customers more likely to send more support tickets? Are they more likely to retain?)
  • Are all support tickets the same? Are there some that we should exclude, or some we should weigh more heavily?
  • Does timing matter? Should we be more concerned about tickets from new customers? From those about to renew?
  • How do we adjust for tickets with good or bad CSAT scores?
  • Do customers who churn disengage with our product and send fewer support tickets despite having a bad user experience?

Each of these questions is a piece of evidence. That evidence, however, won’t necessarily all point in the same direction. To answer the question, “Are support tickets correlated with retention?” we have to combine this evidence with our reasoning and intuition, and come to the answer that we think is best supported.

And that answer, combined with the answers to similar questions and with our own reasoning and intuition, rolls up to help us decide whether we should invest in fixing bugs or building new features.

Iteration helps you get a more complete picture

This process—answering multiple layers of questions, looping over each, rolling them up to a final set of inputs, and weighing all of them together to come to a final conclusion—is iterative analytics.

Data doesn’t change the foundational approach; it simply provides a powerful way to answer the inner set of questions. And even then, data won’t always answer those questions directly. You often have to combine different datasets together, find creative measures of seemingly immeasurable concepts, and cut and analyze results in different ways, with each new iteration offering a new perspective on your problem.

As an example, let’s take a step-by-step look at how we’d use iterative analysis to approach the question, “Are support tickets correlated with retention?”:

Establish the context

Ask high-level questions that give you a rough understanding of the problem you are trying to solve. Figure out how many support tickets are submitted per customer. Are they changing over time? Do churned customers submit more? In this scenario, asking these questions reveals that churned customers, on average, submit fewer tickets.
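As a rough sketch, here's what this first pass might look like in pandas. Everything here is hypothetical: the file names (tickets.csv, customers.csv) and columns (customer_id, created_at, churned) are stand-ins for whatever your warehouse actually holds.

```python
import pandas as pd

# Hypothetical inputs: a support ticket log and a customer table with a churn flag.
tickets = pd.read_csv("tickets.csv")      # assumed columns: customer_id, created_at
customers = pd.read_csv("customers.csv")  # assumed columns: customer_id, churned (0/1)

# Tickets submitted per customer; customers with no tickets count as zero.
ticket_counts = tickets.groupby("customer_id").size().rename("ticket_count")
df = (
    customers.merge(ticket_counts, on="customer_id", how="left")
             .fillna({"ticket_count": 0})
)

# Do churned customers submit more tickets, on average, than retained ones?
print(df.groupby("churned")["ticket_count"].mean())

# Are ticket volumes changing over time? Monthly totals give a quick read.
tickets["created_at"] = pd.to_datetime(tickets["created_at"])
print(tickets.set_index("created_at").resample("MS").size())
```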

Drill deeper

Ask questions that could explain your initial findings. Why do you think the trends are what they are? Then ask a series of more detailed questions to investigate these hypotheses.

Say that the initial result—that churned customers submit fewer tickets—is counter to the intuition of the Customer Success team. Why might that be? Maybe there’s a hidden confounding variable, like customer size or time as a customer. Bigger or older customers might ask more questions but are also retained at a higher rate.

Additionally, is there any segment of customers for whom this is not true? Pull in some dimensions about the customer and create a few quick charts to see if the relationship holds across different customer cohorts.
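Continuing the hypothetical sketch above, a quick cohort cut might look like the following; size_band is an assumed dimension on the customer table (e.g. small, mid-market, enterprise), not something from the scenario itself.

```python
# Does the churned-vs-retained gap hold within each customer segment?
# Reuses df from the earlier sketch; size_band is a hypothetical dimension.
by_segment = (
    df.groupby(["size_band", "churned"])["ticket_count"]
      .mean()
      .unstack("churned")
)
print(by_segment)  # one row per segment; churned vs. retained averages side by side
```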

Digging deeper in this scenario solidifies the point: across many customer segments, churned customers submit fewer tickets on average.

Combine data with human reasoning

Digest and analyze the data collected from the previous questions. Understand that the evidence won’t necessarily point in the same direction. You’ll have to use your reasoning and intuition to arrive at the best decision.

Why are retained customers writing more tickets? Maybe it’s because they spend more time in the product and, as a result, find more bugs. Let’s explore this. Does our result change if we control for idle session time? This might require introducing new data. This time, the relationship is complex enough that the best way to check it is with a regression on retention.
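One way that check might look, sketched with statsmodels on a hypothetical per-customer feature table (the file and column names are assumptions, and a real analysis would pick its covariates carefully):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical frame: one row per customer, with a 0/1 retention flag,
# ticket volume, and total active (non-idle) minutes in the product.
features = pd.read_csv("customer_features.csv")  # retained, ticket_count, active_minutes

# Logistic regression of retention on ticket volume, controlling for usage,
# so the ticket coefficient isn't just picking up "heavy users stay longer".
model = smf.logit("retained ~ ticket_count + active_minutes", data=features).fit()
print(model.summary())
```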

In this example, even after controlling for idle session time, we still see a positive relationship between retention and support ticket volume. Retained customers submit more support tickets per active minute than those who churn.

Combine different datasets together

Consider what’s not being measured in your current dataset. It may be that the answer is hidden in data you haven’t explored yet.

We have very high support customer satisfaction (CSAT) scores. Could people who write into support have such a positive experience that they forgive us for the bugs? Are customer-level CSAT scores correlated with churn and with support ticket volume? Pull in the CSAT data and look at average scores by customer.
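A minimal sketch of that join, again with assumed file and column names, reusing the customer frame from the earlier sketches:

```python
import pandas as pd

# Hypothetical CSAT export: one surveyed ticket per row.
csat = pd.read_csv("csat_scores.csv")  # assumed columns: customer_id, csat_score

# Average CSAT per customer, joined onto the customer/churn frame built earlier.
avg_csat = csat.groupby("customer_id")["csat_score"].mean().rename("avg_csat")
df = df.merge(avg_csat, on="customer_id", how="left")

# Do churned customers report a worse support experience?
print(df.groupby("churned")["avg_csat"].mean())
```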

In this scenario, asking these questions reveals that churned customers give much lower CSAT scores. That’s promising, but the causal direction is hard to pin down. And from the previous analysis, we know that CSAT scores are much lower for new users.

Keep iterating

When you find threads to pull on, keep digging to understand them better. Iterative analysis is like solving a mystery—there’s rarely a smoking gun. Instead, you find answers by diligently following clues one at a time.

In this example, you can look at the relationship between CSAT scores, support tickets, and churn. CSAT scores are low for new users, so perhaps their experience is particularly bad. Rather than solving all bugs, could we just focus on those that affect new users?
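To pull on that thread, one could filter the ticket log down to new customers and rank the bugs they report. The tenure_days and bug_tag fields below are hypothetical, as is the 90-day definition of “new”:

```python
# Which bugs do new customers hit most? Reuses customers and tickets from the
# first sketch; tenure_days and bug_tag are assumed fields.
new_ids = customers.loc[customers["tenure_days"] < 90, "customer_id"]
new_tickets = tickets[tickets["customer_id"].isin(new_ids)]
print(new_tickets["bug_tag"].value_counts().head(10))
```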

Doing so can help us understand which bugs prompt new customers to write in, and address those bugs first. And we now know we can reduce churn if our support team prioritizes questions from new customers.

In this scenario, should we put a few engineers on this core set of bugs and the rest of the team on new features? Should everyone work on bugs? To answer those questions, we return to our original, larger set of questions and work through them much as we did this one. This process is iterative analytics: answering multiple layers of questions, looping over each, rolling them up to a final set of inputs, and weighing them all together to reach a final conclusion.

In conclusion

Iterative analytics gives product analysts a better understanding of their users and, in turn, a better way to prioritize the product team’s work. Used alongside product analytics and BI tools, iterative analytics helps product teams make better-informed decisions and deliver results more quickly, driving faster product-led growth with less risk.
