The intelligence gap is widening
We surveyed 1,018 global executives to better understand the application and impact of AI adoption through the COVID-19 crisis. Respondents were split fairly evenly between those who said COVID-19 had a negative impact on their business and those who reported a positive impact, with larger companies (sales over US$10 billion) more likely to benefit. These larger companies (nearly four in ten of respondents) invested more in AI development before the pandemic began and are moving from testing AI to using it. They also report a return on their AI investment during the pandemic and are more likely to increase their use of AI, discover new use cases for it, and train more employees to use it.
We find the same is true of smaller companies that invested heavily in AI before the pandemic. Furthermore, an in-depth examination of AI dynamics in India shows that early adopters benefit from better decision-making with AI, which helped them elevate the health and safety of employees and customers during the pandemic. That research also points to other benefits, such as improved productivity and design innovation through the adoption of AI-powered tools. (For more, see AI: Opportunity in the midst of crisis.)
The overall picture is a virtuous cycle for those heavily invested in AI prior to COVID-19, a cycle that tends to widen the intelligence gap. Organizations with more fully embedded AI increased their use of AI during the pandemic by 57% – more than double the rate of early-stage implementers – and they plan to increase investment and adoption going forward. Conversely, a downward cycle affects companies that are not investing, are underperforming, and are struggling to find funding for AI. A good place to start in reversing this dynamic is to better understand the impact of AI efforts. Leading companies create AI-targeted ROI metrics, clarifying the full range of use cases and tailoring them to these metrics, and thus secure greater buy-in from senior management.
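To make the idea of an AI-targeted ROI metric concrete, here is a minimal sketch of ranking use cases by a simple ROI figure. The use-case names, benefit, and cost numbers are invented for illustration, not drawn from the survey.

```python
# Hypothetical illustration: ranking AI use cases by a simple ROI metric.
# All names and figures below are assumptions made for the example.

use_cases = [
    # (name, annual_benefit, annual_cost) in arbitrary currency units
    ("demand forecasting", 1_200_000, 400_000),
    ("chatbot support",      600_000, 250_000),
    ("invoice processing",   300_000, 180_000),
]

def roi(benefit: float, cost: float) -> float:
    """Return on investment: net benefit relative to cost."""
    return (benefit - cost) / cost

# Rank use cases so the highest-ROI candidates surface first.
ranked = sorted(use_cases, key=lambda uc: roi(uc[1], uc[2]), reverse=True)
for name, benefit, cost in ranked:
    print(f"{name}: ROI = {roi(benefit, cost):.0%}")
```

Even a rough ranking like this gives senior management a shared yardstick for deciding which pilots deserve further investment.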
Deploy AI models in action
The ability to operationalize AI effectively – what we call AI maturity – will be key to sustaining progress among leaders and closing the gap for latecomers. Our survey allows us to group companies into three levels of AI maturity: those with fully embedded AI (25% of respondents), companies piloting AI implementations (55%), and companies still exploring AI without implementing anything (20%).
Embedded leaders. Companies with fully embedded AI have often already woven it into their business processes, where it is widely adopted. Many of these companies have deployed 10 or more AI applications, ranging from customer-facing applications (such as chatbots and conversational systems, demand forecasting, and customer targeting) to back-office applications, including contract analysis, invoice processing, and risk management. Others have deployed five or more AI applications. It is no surprise that many larger companies (almost 34%) have fully embedded AI. Consolidating our findings on benefits, we find that companies with fully embedded AI have outperformed their counterparts during the pandemic and are also investing more in AI, toward further improvements in the post-pandemic world.
Scale up for profit. Fully embedding AI across the enterprise and across all functional areas is a significant challenge. As companies move from building stand-alone models (on an AI platform), to capturing value by using AI to better foresee changing business conditions (via prediction-as-a-service tools), to harnessing the full power of AI by automating and tracking activities in a model factory and beyond, they will need to invest in a range of capabilities, including:
domain experts from business units to articulate use cases
data engineers and data scientists who understand how information flows and can build machine learning models
systems analysts and software developers who can build software systems, along with machine learning engineers who can optimize models for added value
ModelOps, DataOps, and DevOps experts who can maintain embedded AI models
governance and ethics initiatives to enable effective management of these systems.
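One concrete ModelOps task behind the capabilities above is monitoring a deployed model's input data for drift. The sketch below is a deliberately minimal illustration; the feature values and the z-score threshold are assumptions, and production systems would use richer statistics over far more data.

```python
# Minimal sketch of a ModelOps-style drift check: compare a feature's mean
# in live traffic against its training baseline, in baseline standard
# deviations. Data and threshold are illustrative assumptions.
from statistics import mean, stdev

def drifted(baseline: list[float], live: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift when the live mean moves more than z_threshold
    baseline standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(live) != mu
    return abs(mean(live) - mu) / sigma > z_threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.0, 9.8, 10.2]  # training-time feature
stable   = [10.1, 9.9, 10.3]                          # live data, no shift
shifted  = [14.0, 15.2, 14.8]                         # live data, clear shift

print(drifted(baseline, stable))   # no alert expected
print(drifted(baseline, shifted))  # alert expected
```

A check like this would typically run on a schedule against each embedded model, triggering retraining or human review when it fires.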
Bringing together talent, processes, and models, along with the agility to adjust AI systems as needed, is the key to unlocking scale. As our research in India has shown, those skills allow companies to target the most promising business use cases, transition smoothly from pilot to widespread deployment, and deliver the strategic benefits AI promises for growth and resilience. Related work shows that successful companies can strengthen their competitive advantage by personalizing customer experiences more effectively, offering dynamic-pricing tools, using intelligent systems that automatically protect against fraud, and deploying virtual assistants that leverage employee knowledge and skills.
Manage risk, build trust
As companies gain momentum in deploying AI models and systems at scale, we have seen another gap emerge: differing capabilities for identifying, mitigating, and managing the risks of AI. These risks are intertwined in areas such as bias in hiring models, customer privacy, transparency in the use of AI (which requires accountability and explainability of both processes and results), and the security of data and systems. In our survey, only 12% of companies (and 29% of those with fully embedded AI) have fully embedded AI risk management and controls and automated them enough to achieve scale. Another 37% of respondents reported having strategies and policies in place to address AI risks.
When we asked about the specifics of risk management strategies, we found that algorithmic bias in models (often related to race or gender) was a central concern for nearly 36% of all respondents and nearly 60% of companies with fully embedded AI. Reliability and robustness of models, data security, and privacy are among the other AI risks more often addressed by companies that have successfully scaled their AI efforts.
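To illustrate what checking for the kind of bias described above can look like in practice, here is one common fairness measure for a hiring model: the demographic parity gap, the difference in selection rates between two groups. The decision data and the 0.1 tolerance below are invented for the example; real audits use larger samples and multiple metrics.

```python
# Illustrative bias check for a hiring model: demographic parity gap,
# i.e. the difference in positive-decision rates between two groups.
# Decisions and the tolerance threshold are invented for this sketch.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive (hire) decisions; decisions are 0/1."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # model decisions for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # model decisions for group B

gap = parity_gap(group_a, group_b)
print(f"parity gap = {gap:.3f}")
print("flag for review" if gap > 0.1 else "within tolerance")
```

Automating checks like this across every deployed model is one way companies move from ad hoc policies toward the embedded, scaled risk controls the survey leaders report.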
Managing risk across the full AI horizon will require better tools, starting with a responsible-AI framework to assess the necessary steps and an AI risk assessment to gauge readiness to take them. With those as a foundation, companies will find it easier to adopt best practices and governance as they build, deploy, and monitor AI software and use it for decision-making. Starting this journey sooner rather than later will allow leaders to gain customer trust and better navigate upcoming regulatory changes. Doing so will also extend the competitive advantage these leaders are enjoying from AI.