Help! My Enterprise AI is cannibalizing itself

Navneet Singh
Aug 31, 2021


Summoned to solve a mystery

A large SaaS company (we'll call them SaasyOne to protect the innocent) recently decided to get serious about reducing customer churn and hired us to help. The results of their previous attempts had been unsatisfactory and, as we'll see, confusing. Their first big attempt to reduce churn was in October 2020, when they hired a large group of Customer Success Managers (CSMs), who used rules of thumb to identify and contact at-risk customers. The results were disappointing. Three months later, the company built a churn risk AI model, trained and tested on customer behavior data such as product usage and support tickets.

The resulting mystery was twofold:

  1. When they blind-tested the new AI model, it scored really well on the period before the CSMs were hired, but did far worse on the period when the CSM team was active and using its rules of thumb to target customers.
  2. After SaasyOne launched the AI model in January 2021, they were disheartened to see system performance worsen with each passing month. The model flagged more and more happy customers as at-risk while missing more and more genuinely dissatisfied ones.

It looked like the AI was reacting poorly to its own work, and even to the intelligent work of others. That raised the question: was the AI system cannibalizing itself, and being cannibalized by others?

Why is your AI losing its edge?

When your data contains customers who have been "treated" by the CSMs, the result is a form of data drift. There are two ways this can happen, which we'll look at in a minute. But first, let's lay some groundwork.

Your simple churn AI model

When you train your AI model on historical data, you're teaching it to find both healthy patterns in customer activity that are associated with retained customers and unhealthy activity patterns that point to impending churn. For example, if the model finds that rising support tickets and dropping product usage tend to precede a lost customer, it forms a link between that activity pattern and the outcome "Churn". Once the model has been trained on past customer data, it can predict which current customers, given their recent activity, are at risk of churning. Once you activate the model, your CSMs act on its warnings and work to save any flagged customers.
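To make this concrete, here is a minimal sketch of such a model in Python with scikit-learn. The file name, feature columns, and churned label are invented for illustration, not SaasyOne's actual schema:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical dataset: one row per customer,
# with activity features plus the observed outcome.
df = pd.read_csv("customers.csv")
features = ["monthly_logins", "usage_trend", "support_tickets", "tenure_months"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

# Learn links between activity patterns and the "Churn" outcome.
model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# In production: flag active customers whose predicted churn
# probability exceeds a threshold, for the CSMs to act on.
at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.7]
```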

Ways in which treatment dilutes your data

There are two ways your training dataset can end up containing treated customers:

  1. The churn AI model identifies at-risk customers. Once a CSM reaches out and acts on such a customer, that customer becomes a treated customer.
  2. Even before a new churn AI model is put in place, some customers may already have been acted upon. Most SaaS organizations have a CSM team with its own ways of identifying customers to reach out to. Even if the method is a simple rule of thumb, the fact remains that certain customers were treated.

Both of these played a role in the mystery scenario described above. Three months before the AI model was tested, many CSMs had been hired into the organization. They had started reaching out to customers based on certain red flags and treating them with offers, etc. And of course, once the AI model was put into action, the CSMs started treating customers based on the model’s recommendations.

Why should you care?

Datasets with treated customers can confuse your AI model. Suppose you have two customers with identical red flags, e.g. a drop in usage and a sudden surge in the number of support tickets filed. The first one ends up churning, but the second one, who was treated (say, with a two-free-months offer), ends up staying. The model has been trained to find links between activity patterns and churn risk. If the treatment isn't accounted for, it sees two identical activity patterns with two different outcomes, Stay and Churn. It's confused.
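A toy example makes the confusion visible. The two rows below (values invented) are identical on every feature the model can see, yet carry opposite labels, because the treatment that changed the second customer's outcome was never recorded:

```python
import pandas as pd

# Two "twin" customers: identical red flags, opposite outcomes.
# The treatment explaining the difference is invisible to the model.
twins = pd.DataFrame({
    "usage_trend":     [-0.4, -0.4],  # same drop in product usage
    "support_tickets": [9, 9],        # same surge in tickets filed
    "churned":         [1, 0],        # Churn vs. Stay (the stayer was treated)
})
print(twins)
# Fit on rows like these and the best any model can do for this
# pattern is hedge toward a 50/50 churn prediction.
```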

How do you stay relevant in the face of this data drift?

Once you’re aware of the presence of treatment, you’ll have a few options to make your AI model relevant again.

Exclude treated customers

If only a small portion of your customers have been treated, you can simply exclude them from your training dataset. This only works for evaluating your AI model's accuracy on historical data, though: the model will never learn the impact of treatment on churn risk, and hence will never work well on a future dataset that contains treated customers.
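In code, the exclusion is a simple filter, assuming your historical data carries (or you can reconstruct) a flag marking who was treated; the was_treated column here is hypothetical:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("customers.csv")  # same hypothetical dataset as above
features = ["monthly_logins", "usage_trend", "support_tickets", "tenure_months"]

# Train only on customers the CSMs never touched.
untreated = df[df["was_treated"] == 0]
model = GradientBoostingClassifier(random_state=42)
model.fit(untreated[features], untreated["churned"])
```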

Capture and account for the treatment data

This is the way to go if a substantial number of your customers have been treated. In fact, it's a good ongoing strategy because it keeps adding value once your AI model is in production. If you capture the treatment attributes, you no longer have the puzzling case of two twin customers with identical behavior and different outcomes. Your model now sees two different customers who were treated differently. It can then learn, for example, that the second customer, who showed the same worrisome behavior as the first but received a two-months-free offer, ended up staying, and why.
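A minimal way to do this, sketched below, is to log the treatment attributes alongside behavior and feed them to the model as extra features. The treatment columns are hypothetical examples of what you might capture:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("customers.csv")  # hypothetical dataset, treatments now logged
behavior = ["monthly_logins", "usage_trend", "support_tickets", "tenure_months"]
treatment = ["csm_contacts", "free_months_offered", "discount_pct"]

# The "twins" now look different: one row records the offer it received,
# so identical behavior no longer maps to contradictory outcomes.
model = GradientBoostingClassifier(random_state=42)
model.fit(df[behavior + treatment], df["churned"])
```

A side benefit: at prediction time you can score the same customer under different hypothetical treatment values (all zeros for "do nothing" versus the offer columns filled in for "make the offer") and compare, which is already halfway to the uplift approach described next.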

Change the approach to your AI modeling

Ideally, in the presence of treatment data, you don't want to stop at a churn prediction model. Instead, one approach you can take is to start with the objective of "saving customers" and work backward to whatever maximizes the chance of achieving that objective. In the literature, this is known as uplift modeling: estimating how much a given treatment changes each customer's probability of staying, rather than merely how likely they are to leave.
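There are many uplift techniques; one of the simplest is the so-called two-model (T-learner) approach, sketched below as an illustration under the same hypothetical schema, not as the one true method: fit separate churn models on treated and untreated customers, then rank customers by the estimated drop in churn probability if they were treated.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("customers.csv")  # hypothetical dataset with a was_treated flag
features = ["monthly_logins", "usage_trend", "support_tickets", "tenure_months"]

# Two-model uplift: one churn model per group.
treated = df[df["was_treated"] == 1]
control = df[df["was_treated"] == 0]
m_treated = GradientBoostingClassifier(random_state=42).fit(
    treated[features], treated["churned"])
m_control = GradientBoostingClassifier(random_state=42).fit(
    control[features], control["churned"])

# Uplift = expected reduction in churn probability from treating.
current = df[features]  # in practice, score currently active customers
uplift = (m_control.predict_proba(current)[:, 1]
          - m_treated.predict_proba(current)[:, 1])

# Point CSM effort at the customers with the highest expected lift,
# not merely the highest churn risk.
top_targets = df.assign(uplift=uplift).nlargest(10, "uplift")
```

The difference in emphasis matters: a pure churn model happily sends CSMs after customers who would leave no matter what, while an uplift model concentrates effort where treatment actually changes the outcome.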

Conclusion

You built your AI model well and gave it a job it could do well. You were justified in the premise that there's a strong relationship between a customer's behavior and their propensity to stay. However, this relationship weakens when additional factors, like offers made to a customer, come into play. If you're unaware of these factors or ignore them, you run the risk of dumbing down your AI and making it less relevant. That's what happened above, when SaasyOne failed to account for treatment resulting both from the AI model and from the CSMs' earlier rules of thumb.

By capturing treatment data and using it creatively, you can re-educate your AI and make it effective at predicting churn risk again. For SaasyOne, we meticulously crafted model features representing the details of customer contacts made and the treatments provided. Doing so not only put an auto-degrading AI model back on track but transformed it into an auto-improving one.