In our previous article, Customer Churn Prediction and Prevention Using AI, we discussed the role of AI in predicting and preventing customer churn. We noted that acquiring a new customer can cost six to seven times more than retaining an existing one. Since a 5% increase in customer retention can boost company profits by up to 95%, organisations stand to benefit greatly from investing time, energy, and resources in developing cutting-edge churn prediction models.
However, building a custom churn prediction model can be extremely costly and typically requires recruiting subject-matter experts. These models also often demand intensive computing capacity. Meeting such financial and resource demands may not be viable for every organisation.
Developing a churn prediction model with evoML
TurinTech is the leader in AI optimisation. Our evoML platform helps both technical and business experts develop efficient and explainable AI solutions for a range of business problems through a simplified process. In this article, we discuss how evoML can be used to develop and optimise (i.e. further improve the performance of) a churn prediction model.
The dataset and the business problem
For this analysis, we consider a customer churn dataset from Kaggle (originally an IBM dataset). The dataset describes a fictional telco company that offers home phone and internet services to customers. It captures information on 7,043 customers across 21 columns, which serve as the candidate features of the model (e.g. customerID). evoML’s data visualisation functionality enables users to gain a deeper understanding of the dataset and its features. Figure 1 gives a snapshot of the dataset uploaded to evoML. Our business problem is to use this dataset to build a model that predicts whether a given customer will churn. As such, Churn is our target feature.
Figure 1: An overview of the features from the dataset uploaded to evoML
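evoML ingests the CSV directly, but the same first look at the data can be reproduced in a few lines of pandas. The inline sample below is a tiny illustrative stand-in for the full 7,043-row file, with a handful of the 21 columns:

```python
import io

import pandas as pd

# Minimal stand-in for the Kaggle/IBM Telco churn CSV: a few of the
# 21 columns and three illustrative rows.
csv = io.StringIO(
    "customerID,gender,SeniorCitizen,Contract,MonthlyCharges,Churn\n"
    "0001-A,Female,0,Month-to-month,29.85,No\n"
    "0002-B,Male,0,One year,56.95,No\n"
    "0003-C,Male,1,Month-to-month,70.70,Yes\n"
)
df = pd.read_csv(csv)

# A quick snapshot of the features, similar in spirit to Figure 1.
print(df.shape)                      # (rows, columns)
print(df.dtypes)
print(df["Churn"].value_counts())    # class balance of the target
```

On the real file, the same calls immediately reveal the 7,043 × 21 shape, the column types, and the churn class balance.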
We see that the features roughly encapsulate information on two aspects of the client:
- Customer-based factors. Features such as gender and SeniorCitizen provide information about the demographics of the customer. When incorporated into the model, these features let us find trends in customer-based factors that contribute to churn.
- Service-based factors. The dataset also captures data about the customers’ use of the service. For example, features such as MonthlyCharges, Contract, and PaperlessBilling give information on the different aspects of the telecom service used by each customer. This information will allow us to see how service-based factors contribute to customer churn.
evoML automatically calculates correlation statistics to measure the extent to which each feature is correlated with the target feature. The platform also generates interactive visuals that allow users to get a better sense of the interactions between features.
In this dataset, Contract is a feature that has a high correlation with Churn. As we can see from Figure 2, the longer the contract length, the lower the percentage of customers who churn.
Figure 2: Percentage bar chart for association between contract and churn
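The percentages behind a chart like Figure 2 are a simple cross-tabulation of Contract against Churn. A sketch on a small illustrative sample (not the real row counts):

```python
import pandas as pd

# Illustrative subset: contract type and churn outcome for a handful
# of customers (the full dataset has 7,043 rows).
df = pd.DataFrame({
    "Contract": ["Month-to-month", "Month-to-month", "One year",
                 "One year", "Two year", "Two year"],
    "Churn":    ["Yes", "No", "No", "Yes", "No", "No"],
})

# Percentage of churners within each contract type: normalising by
# row turns raw counts into the per-contract churn rates.
pct = (
    pd.crosstab(df["Contract"], df["Churn"], normalize="index") * 100
).round(1)
print(pct)
```

Run on the full dataset, the same two lines reproduce the pattern in Figure 2: month-to-month contracts churn at a far higher rate than one- or two-year contracts.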
Having explored the dataset as we did above, the conventional next step would be to preprocess the data to make it machine-learning-ready. Feature preprocessing is one of the most important steps in developing a machine learning model, and consists of data cleaning, feature generation, and feature selection. Done manually, it takes hours of difficult, tedious work.
With evoML, we can get our data ML-ready in minutes. evoML automatically transforms, selects and generates the most suitable features for a given dataset. Inspired by Darwin’s theory of biological evolution, evoML applies genetic algorithms to evolve optimal features through multiple generations.
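evoML automates this step end to end; for comparison, a minimal hand-rolled equivalent in scikit-learn (encoding categoricals and scaling numerics, on an assumed slice of the Telco columns) might look like:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy slice of the Telco data; a real pipeline would cover all 21 columns.
df = pd.DataFrame({
    "Contract": ["Month-to-month", "Two year", "One year"],
    "PaperlessBilling": ["Yes", "No", "Yes"],
    "MonthlyCharges": [70.70, 20.00, 56.95],
})

# One-hot encode categoricals and scale numerics -- the baseline kind of
# transformation that evoML applies (and then evolves further) automatically.
pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"),
     ["Contract", "PaperlessBilling"]),
    ("num", StandardScaler(), ["MonthlyCharges"]),
])
X = pre.fit_transform(df)
print(X.shape)  # 3 rows; 5 one-hot columns + 1 scaled numeric = 6 features
```

This hand-written version only transforms features; evoML’s genetic search additionally generates and selects new feature combinations across generations.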
Figure 3 shows how we have used our dataset and the target feature to carry out this machine learning task.
Figure 3: Starting the model-building process on evoML
To improve model performance, evoML has dropped some of the least important features and generated refined features from the existing ones. Starting from the original dataset’s 21 features, evoML produced 50 features in the final ML-ready dataset (see Figure 4).
Figure 4: Overview of evoML data preprocessing
In the case of churn prediction, we want to know which customers are churning, but we might also want to know why they are churning to help us improve marketing performance. This means we need to tackle accuracy-explainability trade-offs when building churn prediction models.
Powered by TurinTech’s proprietary research in evolutionary optimisation, evoML enables multi-objective optimisation, which allows users to build the best possible model for their business problem by choosing more than one metric to evaluate. Users can select a set of criteria, such as accuracy, execution time, or explainability. Essentially, this means users can tackle difficult trade-offs with a few clicks.
For this dataset, we have selected ROC AUC (area under the receiver operating characteristic curve) and explainability as metrics to be optimised during the model selection and tuning process.
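ROC AUC rewards a model for ranking churners above non-churners regardless of any single classification threshold, which is why it suits an imbalanced problem like churn. A small sketch of how the metric behaves, using made-up scores:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical churn probabilities from a model, against true labels
# (1 = churned).  ROC AUC measures how well the model ranks churners
# above non-churners, independent of any single threshold.
y_true = [1, 0, 1, 0, 0, 1]
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]

# Every churner here outscores every non-churner, so the AUC is perfect.
print(roc_auc_score(y_true, y_score))  # 1.0
```

A random ranking would score around 0.5, so the 0.864 reported later in this article indicates a model that separates churners from non-churners well.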
Building and evaluating the models
Once we initiated the model-building process, it took evoML only two minutes to generate and optimise 200 different models in parallel. Conducting this process manually could take hours or even days. Based on ROC AUC and explainability scores, evoML ranked all the models and recommended a tuned logistic regression classifier (shown in Figure 5) as the best model for this churn prediction task. Based on the model’s performance on validation data, the ROC AUC of the best model is 0.864. Additionally, the model’s precision is 0.748 and its F1 score is 0.727.
Figure 5: Metrics of the best model
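evoML tunes the winning logistic regression automatically, but the core of such a model can be sketched in scikit-learn. The snippet below trains on synthetic data (a single feature standing in for, say, monthly charges) rather than the tuned pipeline evoML produced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the preprocessed features: the probability of
# churn rises with the single feature, plus some noise.
X = rng.uniform(0, 100, size=(500, 1))
y = (X[:, 0] + rng.normal(0, 20, 500) > 60).astype(int)

# Fit a plain logistic regression and score it on the same metrics
# reported in Figure 5 (precision, F1).
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)
print("precision:", round(precision_score(y, pred), 3))
print("f1:", round(f1_score(y, pred), 3))
```

Logistic regression is a natural winner under an explainability objective: each feature’s coefficient directly states how it shifts the log-odds of churn.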
We can also evaluate the model’s performance on unseen (test) data. According to the confusion matrix in Figure 6, of the 374 cases of actual ‘yes’ (i.e. the customer will churn), our model correctly predicted 202 as ‘yes’. The model is far more accurate when predicting ‘no’: of the 1,035 cases of actual ‘no’, it predicted 935 correctly.
Figure 6: Confusion matrix for the best model
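The counts in Figure 6 translate directly into per-class recall, which makes the asymmetry concrete: about 0.54 for ‘yes’ versus about 0.90 for ‘no’. The arithmetic:

```python
# Per-class recall derived from the Figure 6 confusion-matrix counts.
tp_yes, actual_yes = 202, 374    # churners correctly flagged / actual churners
tp_no, actual_no = 935, 1035     # non-churners correctly predicted / actual non-churners

recall_yes = tp_yes / actual_yes
recall_no = tp_no / actual_no
print(f"recall (yes): {recall_yes:.2f}")  # 0.54
print(f"recall (no):  {recall_no:.2f}")   # 0.90
```

For churn prevention, the ‘yes’ recall is the business-critical number: every missed churner is a retention opportunity lost, so a team might trade some precision for higher recall by lowering the classification threshold.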
According to the precision-recall curve in Figure 7, the model performs better than the baseline classifier at lower levels of classification threshold. In the case of predicting ‘no’, the model consistently performs better than the baseline classifier for all levels of classification threshold. In the case of predicting ‘yes’, the model’s performance dips below the baseline classifier after a classification threshold of 0.04.
Figure 7: Precision-recall curve for the best model
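A curve like Figure 7 comes from sweeping the classification threshold and recording precision and recall at each value. A sketch on a tiny illustrative sample:

```python
from sklearn.metrics import precision_recall_curve

# Scores and labels for a tiny illustrative sample; the function sweeps
# the classification threshold, as in Figure 7.
y_true = [0, 0, 1, 1, 1]
y_score = [0.1, 0.6, 0.35, 0.8, 0.9]
precision, recall, thresholds = precision_recall_curve(y_true, y_score)

# The baseline classifier's precision is the positive rate (3/5 here);
# a useful model stays above that line over the thresholds that matter.
for p, r in zip(precision, recall):
    print(f"precision={p:.2f} recall={r:.2f}")
```

Reading such a curve against the baseline is how we judge, as above, where the model adds value over simply predicting the majority class.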
We can use the evoML insights to decide if the model suits our churn prediction goals, and choose to deploy the model.
Deploying the model
In an earlier blog article, we pointed out that taking a model from conceptualisation to deployment can be a time-consuming and cumbersome process. With evoML’s single-click deployment feature, however, models can be deployed far more efficiently, ensuring that organisations can drive value from data in real time. evoML also provides a source-code download feature, giving users full transparency and enabling manual customisation of the model (see Figure 8).
Figure 8: evoML’s model deployment and source code features
As we have shown above, building and deploying a churn prediction model with evoML is a straightforward and efficient process. evoML enables organisations to accurately identify customers with a high likelihood of churning, so that the client base can be retained through targeted churn prevention strategies.
About the Author
Malithi Alahapperuma | TurinTech Technical Writer
Researcher, writer and teacher. Curious about the things that happen at the intersection of technology and the humanities. Enjoys reading, cooking, and exploring new cities.