Quick wins, substantial wins, and big wins in big data and AI deployment follow different patterns of identification and implementation. Knowing these patterns makes it easier to manage expectations properly.
Quick wins with big data and/or AI are not as quick as people expect. It usually takes two to three months to identify and specify them, plus a few weeks to implement them. Implementation is usually easy, often frustratingly so. The results may vary broadly, and they are usually not clear from the outset. Quick wins require an open search, best performed jointly by big data/AI specialists and people from the deploying company who know its day-to-day operations very well. Deploying quick wins with big data or AI therefore requires, first and foremost, time to identify the use cases.
There are numerous examples of companies that failed on quick wins because they were not willing to invest enough time. Some of them even invested several million in technology, but then failed to use it because they simply could not identify any problem the new technology would solve. There are also companies that failed to deploy the technology because the solutions were developed by the data science specialists alone. As it turns out, embedding big data or AI into the business processes of an organization is less trivial than data scientists think. Just as mathematics does not suffice to build a great IT tool, IT tools do not suffice to create a great IT solution, and IT solutions do not suffice to create successful use of IT. You may fail on the modeling of the world, the choice of heuristics, or the situated design of the core algorithm, but you may also fail on the coding of the algorithmic solution, the design of its user interface, the coding of the application, or the introduction of the IT solution into business practice.
Consultants may help organizations if they collaborate with the company's domain and execution experts to identify and specify the quick wins. By contrast, consultants who focus on collaborating with top management are usually useless. Good consulting practice here is to identify the case(s) with those who have the practical know-how and then to explain and sell the case(s) to those who have the power. The result is a win for everyone that pays off for the organization, but it is not a game changer.
Substantial wins are different. They rarely start with an open search as most quick wins do. On the contrary, for substantial wins there is usually a clear idea from the very beginning, and research is needed to validate the feasibility of that idea. In most cases the concrete idea addresses decision making, process control, or specific investigations, and it tries to improve quality and reduce costs at the same time. For example, diagnoses in healthcare can be semi-automated, either to support specialists or to enable amateurs to carry out a first check where no professional checks are established as standard.
While substantial wins that have been carefully thought through nearly always work to some extent – unless you totally fail on data procurement – in most cases they are nevertheless not feasible in practice, because they do not work well enough. For example, there are lots and lots of laboratory experiments with healthcare diagnoses that are not good enough to be implemented in practice. Accuracy may turn out to be insufficient, but even if automated decision making outperforms the best specialists, irrational fear, rational liability risks, and ideological thinking against machines often block implementation. Accuracy can be improved with better algorithms and more data, legal risks can be mitigated with a professional set-up of research experiments and, later, of the practical implementation, and fears can be met with fair, comprehensive, and appealing communication. Deploying substantial wins with big data or AI therefore requires good data, advanced algorithmic know-how, research skills, legal expertise, strategic communication capabilities, and more time. Of these, research skills, some luck, and a lot of time are critical.
From an organizational point of view, looking for promising results from research laboratories is one way to make strategic investments in substantial wins. This reduces the risk of too many failed experiments. It enables you to start your pilots with a profound understanding of the feasibility issues, and it helps you shorten and parallelize piloting activities. However, it is not as easy to shop for working solutions from research laboratories as it might seem. As they say, God is in the details. There is a good reason why nearly no laboratory successes with machine learning (i.e., artificial intelligence), say in healthcare diagnoses, are ever transferred to practice.
Big wins are something else entirely. They start neither with an open search nor with a very concrete idea, but with a disciplinary vision – or an entrepreneurial or government vision. To the best of our knowledge, the starting point should be the collection of wishes hidden in the heads of experts, which can be combined into a vision that not only addresses higher efficiency and better quality, but first and foremost addresses new disciplinary options – or entrepreneurial or government options, respectively – that extend the state of the art without contradicting it.
In research, such visions appear quite often in paradigmatic form and/or in relation to emerging political perspectives, but they appear rather rarely with respect to disciplinary competences – that is, if we ignore the delusions of disciplinary grandeur prevalent in comparisons of one's own discipline with others. In professional practice the situation is slightly different. While it is rarely beneficial for your career to proclaim ideological visions of your professional discipline, it sometimes pays off to invest in the development of new competences. In addition, since labor costs are key in many sectors, entrepreneurial initiatives for automation influence professional disciplinary thinking, too: they create free time for new options, and they put pressure on specialists to use this free time for new services they can bill, as remuneration for existing services is reduced when the necessary effort decreases.
As a consequence, if you perform directed design thinking – which may sound like an oxymoron, but actually improves creativity – with disciplinary experts from practice to focus the ideation on new disciplinary competences, and if you involve big data and AI modules in the material construction of solution ideas, you may develop disciplinary visions that rely on the deployment of big data and AI. These visions will create options for big wins, though their implementation will require ten to twenty years. Deploying such big wins therefore requires a merger of discipline-independent digital transformation practices with holistic domain expertise, lots of resources, enough talent, visionary inspiration, a culture of disciplined agility, and nearly unbounded stamina.
The key lesson
The key lesson is that innovation through big data and/or AI takes a lot of time – anything between a few months and (presumably) a few decades. In all cases it requires sensemaking, that is, diving into the challenges and constraints of real business life. In the more promising cases, research skills and the ability to create visions become necessary as well. Indeed, the bigger the gains, the more skills are needed. Holistic, transdisciplinary thinking is always helpful, but it is a must for big wins. You should keep this in mind when you manage expectations for your big data and AI projects. They do not work without domain knowledge and creative thinking. Know-how on big data and AI is necessary, too, but unfortunately it is hardly ever sufficient.