Interest in AI is growing, with annual investments beyond $10 billion. Hackers can make $200 3D masks capable of fooling Apple’s latest 3D face recognition system, yet this has not undermined popular trust in AI. Instead, many experts suggest that deeper work is needed on how to use AI in organizations.
This research consists of in-depth interviews with dozens of AI experts involved in AI adoption processes within firms. They told us the various reasons for adopting AI, such as learning, boosting sales, process optimization, and cost reduction. When we asked about the narrative behind successful AI projects, the interviewed experts described a logic that is focused on individuals and teams and goes beyond profits. We focused on what made AI adoption successful within a company and identified three main features (which, interestingly, have little to do with technology).
Our interviewees consistently mentioned AI’s ability to spot anomalies and highlight them almost instantaneously, but also the importance of collaboration with humans: this, they said, is the key to achieving market use for AI tools. Fostering group action, for an AI leader, means understanding the strengths and weaknesses of teams and individuals, and therefore being aware of what a team needs from outside. A team adopting AI-powered tools must be able to endure the lengthy initial constraints of model iteration. AI differs from previous disruptive technologies because of its evolutionary nature: AI tools improve with each iteration, but these learning mechanisms involve coders in lengthy and painful processes. It is like starting to train for a marathon: a process that initially offers little positive feedback, because speed increases slowly while pain increases quickly.
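The near-instant anomaly spotting our interviewees describe can, at its simplest, mean flagging values that deviate strongly from the norm. The sketch below is purely illustrative (the `flag_anomalies` helper, the threshold, and the sensor readings are invented, not taken from any interviewee’s system); it uses a robust modified z-score based on the median absolute deviation:

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices whose modified z-score (based on the median and
    the median absolute deviation, MAD) exceeds `threshold`."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # no spread at all: nothing to flag
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Illustrative sensor readings with one obvious outlier.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0, 10.1, 9.7]
print(flag_anomalies(readings))  # → [5]
```

Using the median rather than the mean keeps a single extreme value from masking itself by dragging the baseline toward it.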
A typical mistake is feeding an AI tool biased data: for example, using a database that is rich in quantity and spans an extended observation period but covers only one group (say, only white male graduate employees). Datasets must also represent women, minorities, and non-graduates, and so algorithms must be audited for diversity or be written by diverse coders.
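A first-pass diversity audit of a training set can be as simple as counting group shares and flagging under-represented groups. This is a minimal sketch under stated assumptions: the `audit_representation` helper, the 20% floor, and the toy records are all invented for illustration, not an audit method our interviewees reported:

```python
from collections import Counter

def audit_representation(records, attribute, floor=0.10):
    """Report each group's share of `attribute` and whether it meets
    a minimum representation `floor` (share, meets_floor)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: (n / total, n / total >= floor)
            for group, n in counts.items()}

# Hypothetical HR dataset heavily skewed toward one group.
employees = [{"gender": "male"}] * 9 + [{"gender": "female"}]
report = audit_representation(employees, "gender", floor=0.2)
print(report)  # → {'male': (0.9, True), 'female': (0.1, False)}
```

A real audit would also look at outcomes per group, not just raw counts, but even this simple check would catch the “only white male graduates” database described above.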
The implementation time required to iterate AI models is lengthy and cannot be delegated to machines alone, so humans play a key role. Our experts describe the process as similar to building a physical prototype. For example, to turn poor sales into a data problem, mathematicians use data to train both the model and the sales team, producing the insights needed to increase sales. This research indicates that the projects mentioned by the interviewees require at least two years of development before going into production, and many individuals are involved in the process. People, time, and resources are required to align the available datasets with the hardware and software needed to transform “bits into action”. “Bits without action” can be fruitless, as in the infamous $62 million IBM Watson project with the MD Anderson clinic: the alliance tried to use AI against skin cancer, but the project was cancelled by the University of Texas system administration because “it lacked an external advisory group”.
Therefore, iterating AI models is still a group challenge, despite the task being about cognitive power rather than just software. Adopting AI tools within a company needs to be understood as a group-led technology, not as an ideology that ignores different points of view.
Individuals in all the companies we interviewed indicated that AI-powered tool adoption is balanced between profitable projects that boost revenues and exploratory, open-ended fun projects that boost coder motivation. Our interviewees said that working within a group means avoiding hyper-specialization traps (a tendency shown to be toxic because it generates monotony that impairs a company’s ability to face eclectic and well-motivated competitors). A group AI culture is also focused on cross-mentoring: not just seniors mentoring juniors, but also the opposite, to spread fresh ideas rather than just experience. Coders in charge of information flows must talk to businesspeople in charge of profit flows. This is contact between different worlds, like Bahcall’s concept of mixing ice with water (solid and liquid exchange energy when in contact). But to preserve their own states, ice and water must be kept separate at different temperatures. So let coders, businesspeople, and other stakeholders each work in their own environments, but also create the conditions for them to occasionally work together. A great example is the insurance company Anthem, which uses a holistic approach that maximizes the value generated by cognitive applications by organizing frequent short meetings.
Our research suggests that group actions in AI adoption require mastering algorithm iteration, balancing motivation between challenging and profitable projects, and sharing insights, models, and data across teams/departments.
The experts interviewed indicated the need to start any project with a clearly defined problem to solve. They propose first establishing the problem and then reverse-engineering it. Interestingly, this approach is not common, because businesspeople facing competition care about the relevance of a prediction, not its accuracy, while the interviewees who create the algorithms look only for accuracy and theoretically attainable precision, as Andrew Ng explained in an interview. The theoretical desktop test he mentions is a threat to AI, with endless examples, such as Google Health, which failed to deliver in real life what was forecast in the simulator, and where the “quality of images that the nurses were routinely capturing under the constraints of the clinic caused frustration and added work”.
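The gap Ng points to, between accuracy on a clean benchmark and performance under messy field conditions, can be illustrated with a toy experiment. Everything below is an invented sketch (synthetic 2-D data, a nearest-centroid classifier, and Gaussian “field noise” standing in for poor image capture), not anything our interviewees or Google Health actually built:

```python
import random

random.seed(42)

def make_samples(n, centers, spread, extra_noise=0.0):
    """Labelled 2-D points around class centers; `extra_noise`
    mimics degraded field conditions (e.g., poor image capture)."""
    data = []
    for label, (cx, cy) in enumerate(centers):
        for _ in range(n):
            x = cx + random.gauss(0, spread) + random.gauss(0, extra_noise)
            y = cy + random.gauss(0, spread) + random.gauss(0, extra_noise)
            data.append(((x, y), label))
    return data

def centroid_classifier(train):
    """Fit class centroids; predict by nearest centroid."""
    sums, counts = {}, {}
    for (x, y), label in train:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    centroids = {l: (sx / counts[l], sy / counts[l])
                 for l, (sx, sy) in sums.items()}
    def predict(p):
        return min(centroids,
                   key=lambda l: (p[0] - centroids[l][0]) ** 2
                               + (p[1] - centroids[l][1]) ** 2)
    return predict

def accuracy(predict, data):
    return sum(predict(p) == label for p, label in data) / len(data)

centers = [(0.0, 0.0), (5.0, 5.0)]
predict = centroid_classifier(make_samples(200, centers, spread=0.5))
clean = accuracy(predict, make_samples(100, centers, spread=0.5))
noisy = accuracy(predict, make_samples(100, centers, spread=0.5,
                                       extra_noise=2.0))
print(f"lab accuracy: {clean:.2f}  field accuracy: {noisy:.2f}")
```

The “lab” score is near perfect while the “field” score drops once the input degrades, which is exactly why a desktop test alone says little about relevance in production.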
Models help us fight against the resistance of things that do not behave as we expect, and so a model is an attempt to predict unpredictable behavior. Amid the AI hype, many of our interviewees’ customers request fast and accurate AI predictions, but a misalignment of strategy and technology undermines both relevance and accuracy. Strategy and technology should be constantly intertwined in AI, and yet the challenge for each company is to be problem-centric, not solution-centric. Our data suggest a scarcity of problem-centric teams, with technology and solutions prioritized over strategy and problems.
Our data suggest that in the relevance vs. accuracy race, AI models need both, and should be simplified by focusing only on problem-centric features, such as image-based data. The experts we interviewed who deal with image-based products, such as fashion and cosmetics, work with pixels that are easily crunched by machine learning, and so have an easier life than manufacturers building and implementing fast and accurate AI models. Relevance in AI can make a difference by addressing value for the client instead of results for the coders. In other words, start with the end in mind, and only use AI when the problem is clear and the solution can create value. For example, Equilips 4.0 from Asquared delivers value to customers by replacing expensive crash tests: it tests equipment through AI listening instead of stressing welds in manufacturing devices. It is a relevant approach that uses a single, finely tuned feature, the sound of the working device, to detect imperfections from within, eliminating the need for crash tests.
Machines need to be trained by humans (this is also true of unsupervised learning, which is widely used only in specific contexts) because of the overall low quality and variety of data, as well as model limitations. Only humans can culturally train a machine: in other words, teach it to interact with humans in a specific context, such as autonomous vehicles interacting with pedestrians. In fact, autonomous driving systems are now helped by humans to learn how to classify pedestrians as members of cultures according to features such as the shape and color of garments. These different cultural styles may indicate specific behaviors differentiated by assertiveness, politeness, and other road-interaction styles (e.g., Italian vs. Japanese pedestrian road-crossing styles). Another example of style in AI is what inspired the creation of the chatbot Replika, initially created and trained by founder Eugenia Kuyda to have the same chatting style as Roman, a friend who had died. Kuyda wanted a chatbot that enabled her and her friends to keep chatting as if they were with Roman. Now Replika’s (and Roman’s) empathetic style is appreciated well beyond Eugenia’s circle of friends, and it has become a chatbot for sharing personal thoughts about a specific subject.
Despite AI being well suited to pattern recognition, interestingly, it still needs humans to refine behavioral style. Humans share a unique ability to spot subtle details, of which we are largely unaware until we meet agnosic people, who pay the same attention to facial expressions as to objects. Only after such a meeting do we realize how empathetic we can be.
Non-coders hold more positive beliefs about the potential of AI than coders (who are typically very pessimistic about it). The challenge for AI is to make the AI team understand the business, rather than the business understand AI. It is about language, and therefore about the style leaders wish to instill in their teams. Empathy in AI, within AI, or by AI would open new business opportunities (such as Bombfell, AI-based software that chooses clothes for men). This study suggests that a focus on coder-business cultural exchange would help finally break the IT legacy within companies and teams and contribute to the move towards using AI in organizations. We need to cherish human creativity and empathetic skills to instill a cultural style in AI teams.