At a recent client, I helped implement AI-based decision-making software. The software was straightforward to use, with a very minimal interface; it was designed to take on the burden of making decisions based on data rather than human intuition, one of those situations where people might feel that software is taking someone’s job.
The software implementation itself was easy; the tricky part was change management (isn’t it always? If you ask me, the tech is always the easy part: technology can do almost anything; it’s the people around the technology you have to convince). The task the software was meant to automate was a skill the personnel at this company prided themselves on. They were experts in the field, and no software, no matter how sophisticated, was going to make their decisions for them, even if those decisions led to higher profitability.
Our task was to help the users understand that the software’s data-driven decisions would lead to higher profits if they would just trust the system to make those decisions for them. I used the analogy of moving from a sometimes-reliable Ford to a self-driving Tesla: it defaults to self-drive, but you can grab the wheel whenever you feel you need to.
The software used a massive set of data to drive its decisions. No human could have looked at all of that data to make the call; there was simply too much to sift through and too many data points to account for. When the system was allowed to make decisions, it made them well. But in many cases it never got the chance: for some users, the trust just was not there. How could they accept that software could replace their years of experience?
We needed to build trust. We needed to answer the why question.
This is an issue with most software that purports to make better decisions than humans, especially anything AI-based: people need to know how the software reached its decision, what factors went into it, and how those factors were weighted.
In short, we needed transparency. But it’s not just an AI issue: in any change management situation, the best way to improve adoption of new software or processes is to be transparent with your users. Even if you don’t want to reveal all of the “secret sauce” behind a decision, at least some explanation of the why will go a long way towards adoption of the change.
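To make that concrete, here is a minimal sketch (purely illustrative, not the vendor’s actual product) of what surfacing the why can look like in code: the decision comes back with a per-factor breakdown, so the interface can show users what went into the recommendation and how each factor was weighted. The factor names, weights, and threshold below are invented for the example.

```python
# Illustrative sketch only: a decision that carries its own explanation.
# Factor names, weights, and the 0.5 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str
    score: float
    # Per-factor contributions, so the UI can answer "why?" next to the result.
    contributions: dict[str, float]

# Hypothetical learned weights for each input factor.
WEIGHTS = {"demand_forecast": 0.5, "inventory_level": 0.3, "margin": 0.2}

def decide(factors: dict[str, float]) -> Decision:
    """Score the inputs and keep the weighted breakdown for display."""
    contributions = {name: WEIGHTS[name] * value for name, value in factors.items()}
    score = sum(contributions.values())
    recommendation = "reorder" if score > 0.5 else "hold"
    return Decision(recommendation, score, contributions)

if __name__ == "__main__":
    d = decide({"demand_forecast": 0.9, "inventory_level": 0.2, "margin": 0.6})
    print(f"{d.recommendation} (score {d.score:.2f})")
    for name, value in sorted(d.contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {value:+.2f}")
```

Even a partial breakdown like this, shown right in the interface, gives users something to check against their own judgment instead of asking them to trust a black box.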
Humans have two impulses in constant conflict. On the one hand, we want to get the most done with the least effort; we love optimizing our time, minimizing the things we hate doing, and maximizing the things we enjoy. On the other hand, we hate change. I think our hatred of change supersedes our “laziness,” but if we can be convinced that a change is beneficial, we can be convinced to go along with it.
Answering the why question, even partially, can give your users enough confidence that the change might be a change for the better. If you can’t answer the why question fully, going some of the way is better than nothing.
We eventually suggested that the vendor make some changes to answer the why question right in the interface, and adoption soared. The lesson learned: embed the answer to the why question, and your changes will go a lot more smoothly.