In late February last year, I was in London at a conference. On the last day, Canary Wharf was shut down and ostensibly evacuated because of this new virus we had heard about, Covid-19. Fast-forward a year, knowing what we know now, this seems silly – as if there was a big blob of virus roaming the streets looking for victims. It seemed that if we could just get away from that place, we could be safe. And people rushing where? To the tube, a bus, crowded streets?
I caught an earlier, fully laden train back to Winchester, and was the only person wearing a mask. People were staring at me as if I was the virus. In truth, about halfway into the journey I took it off — no virus in Hampshire, right?
Human history is full of surprises, yet we rarely see them coming: from global pandemics, which occur roughly every 30 to 35 years, to a single person facing serious vulnerability every day.
The data of the unknown
AI is all about data. For machine learning algorithms to work, they need lots of it, and it needs to be relevant. In the CallMiner Research Lab – the theoretical AI research group at CallMiner – we spend more than half our time ensuring the data we provide our models will lead to accurate results. When done correctly, the data can provide insights into the “why” of the unexpected. The data can tell us things we did not know, even things hidden in plain sight.
But sadly, in most cases, AI can only predict a tomorrow that looks a lot like today. In the case of a pandemic, we didn’t have relevant data: we have not lived through enough of them within our data-gathering lifetimes to predict one clearly. Could we have predicted the pandemic with the right data? The answer is, scientifically, “probably yes”, but to understand our hesitance, it is important to differentiate between predicting hidden events and predicting unknown events.
Predicting the hidden
There is a difference between hidden and unknown – like the difference between a game of “hide and seek” and a treasure hunt. For hidden entities, we know they are in the data and the task is to find them. Predicting a hidden population within a larger one is a perfect game for AI to play.
An example of a hidden population is people who are in vulnerable situations. The Financial Conduct Authority, a UK regulator, reported that 27.7 million adults in the UK have characteristics of vulnerability, such as poor health, low financial resilience or recent negative life events. Further, UK unemployment is likely to reach 2.6 million by the middle of 2021, 4.1 million people are temporarily away from work, nearly nine million people had to borrow more money because of the pandemic, and it is possible that nearly £5 billion in loans may never be repaid.
Those are staggering numbers and are not exclusive to the UK. So, how do people stay hidden?
Having worked with the collections industry for a number of years, I have learnt that humans can quickly adapt to their situations and hide things from the outside world.
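To make the “hide and seek” idea concrete, here is a minimal, hypothetical sketch: a simple classifier trained on simulated records to surface a small hidden minority within a larger population. The features, data and model choice are illustrative assumptions for the example only, not CallMiner’s actual approach.

```python
# Hypothetical sketch: "hide and seek" with data.
# A hidden population (here, simulated "vulnerable" records) already exists
# in the dataset; the task is simply to find it. Features and data are
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5_000

# Illustrative features: income volatility, missed payments, recent life event.
income_volatility = rng.normal(0.0, 1.0, n)
missed_payments = rng.poisson(0.5, n)
recent_life_event = rng.integers(0, 2, n)

# The "hidden" label: a small minority whose risk is driven by these features.
risk = 0.8 * income_volatility + 0.9 * missed_payments + 1.2 * recent_life_event
hidden = (risk + rng.normal(0.0, 1.0, n)) > 2.5

X = np.column_stack([income_volatility, missed_payments, recent_life_event])
X_train, X_test, y_train, y_test = train_test_split(X, hidden, random_state=0)

# A simple classifier is enough to surface most of the hidden minority.
model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```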
So, if we can build models to identify people in vulnerable situations with mathematical precision, why can’t we solve the problem? The answer is: because we are human. There are four things that need to happen to help someone move to a less vulnerable situation:
- Accurately identify them.
- Compassionately approach them in a way that makes them willing to listen and to admit the need.
- Offer relevant plans or options that are better than living with the current situation.
- Extend a loving hand to help them through the change and the unknown.
The process of change is the business model of every collections firm in the world. Helping someone resolve a debt is a big dose of change. It is how firms go about enabling this change that leads to the success of everyone involved.
Predicting the unknown
There is a second, much more difficult problem for AI – predicting the unknown. Unknown entities are different from hidden ones because they may not exist in the data at all. Could we have predicted the current global pandemic? Or can we predict the next one? We believe the answer is yes, but we also believe there is value in weighing human-based logic and intuition over the results of AI-based predictions of the unknown.
So, why don’t we predict things with massive social impact? The answer is actually the same as above: we are human. Borders are not open, politics are real, herd mentality exists and, worst of all, there’s ignorance.
Recently, humans landed a rover on Mars, some 57 million kilometres away, to within 4m of the target landing point. Talk about a lot of unknowns. It took an estimated 400 Nasa employees and 300 additional scientists to get this far, and cost the US about £1.8 billion in direct costs, without adding in volunteer time. We can predict a pandemic if we have the data, the time and the treasure. But we don’t have all of that.
The best bet is to prepare for an array of unknowns. Humans can be pretty good at this. We are better off learning from this event to broadly prepare for the next one, no matter what it is. The same is true of collections firms.
I predict they are about to receive a large batch of pandemic-related debt. Are we preparing?
Believing it
Even if we have the data, the talent and the financial resources to predict something, people still have to believe it and take action on it. Belief is the real issue any change agent must deal with, and it is one scientists face every day. There is a wonderful saying: “In the face of overwhelming fact, beliefs will win every time.”
There are debt collection firms that believe simply asking for money as fast as possible, in strong language, is the right way to be successful. Having worked with numerous collections companies, we can show with facts that the longer the call, the more it covers anything other than money, and the more open and compassionate the conversation, the more successful a firm will be. But still, some firms believe that is not true.
Dealing with and resolving the vulnerability issues a person faces will always be more successful than banging on doors demanding money. Belief vs fact.
But it is not too late for collections organisations to change. By using AI and ML to find incredibly small but vastly important signals in conversations, such as customer vulnerability, collections firms can focus on facts instead of beliefs.
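As a purely illustrative sketch of what spotting such signals might look like, the snippet below flags possible vulnerability cues in a call transcript using simple phrase matching; the phrases, names and flagging rule are assumptions for the example, not CallMiner’s actual categories, models or APIs.

```python
# Hypothetical sketch only: flagging possible vulnerability cues in call
# transcripts with simple phrase spotting. The cue list and flagging rule
# are invented for illustration.
from dataclasses import dataclass

VULNERABILITY_CUES = [
    "lost my job",
    "can't afford",
    "bereavement",
    "long-term illness",
    "carer for",
    "universal credit",
]

@dataclass
class CallAssessment:
    call_id: str
    cues_found: list[str]

    @property
    def flagged(self) -> bool:
        # Flag the call for human review if any cue appears.
        return len(self.cues_found) > 0

def assess_call(call_id: str, transcript: str) -> CallAssessment:
    text = transcript.lower()
    cues = [cue for cue in VULNERABILITY_CUES if cue in text]
    return CallAssessment(call_id=call_id, cues_found=cues)

if __name__ == "__main__":
    example = assess_call(
        "call-001",
        "I lost my job in March and I can't afford the full payment this month.",
    )
    print(example.flagged, example.cues_found)
```

In practice this kind of phrase spotting is only a starting point; the value comes from routing flagged calls to a human who can have the open, compassionate conversation described above.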
To learn more about how organisations can empower every individual across the enterprise with the ability to identify and adapt to change, read this white paper entitled “Democratising Your Customer Data: Unlock Business Value from Contact Centre to C-Suite”.
- The author, Alexandra Robson, is senior manager of international marketing at CallMiner
- This promoted content was paid for by the party concerned