
Most enterprises that implement AI solutions have learned a bitter lesson along the way: The path to organizationwide AI adoption is far from simple, intuitive or easy.

Arguably, the hardest thing about it is the lack of clear guidance. There’s no “how-to” handbook for enterprisewide AI adoption. The absence of a simple, best-practice guide has deeply frustrated companies all over the world for the last decade, resulting in billions of dollars (in both direct investment and people hours) going down the drain.

The “AI guidebook” hasn’t been written yet because one simply cannot exist. These two letters, “AI,” can mean natural language processing, computer vision or time series analysis — each of which can be useful across a broad range of use cases. Combine this with the diversity of organizations that wish to deploy AI, each with its own data, business needs and pain points, and you get an immensely diverse universe of AI solutions.

So, instead of trying to come up with a universal guidebook for enterprise AI adoption, it’s probably more beneficial to define and tackle the critical elements in deploying these solutions.

The three barriers to enterprisewide AI adoption

AI’s potential business value is immense. It can be used to automate processes, streamline operations and improve product quality. In fact, the promise of AI stands apart from almost all other technology we’ve seen in the past. However, realizing this value requires overcoming three serious barriers: time to value, profitability (and costs) and scale.

Traditionally, the industry benchmark for an AI project, from initiation to production, is 12 to 18 months, and delivering one requires a large team of researchers, ML engineers, software and data engineers, DevOps, QA, data scientists and product/project managers. Keeping this team onboard entails a huge total cost of ownership (TCO).

The obstacles don’t end there: Once the AI application is deployed, it requires ongoing maintenance to keep the solution “on the rails” and to handle the inevitable data drift, which can easily throw off the trained model. And even with maintenance costs accounted for, all of this investment covers just a single AI application.
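
To make the maintenance burden concrete, here is a minimal sketch of one common way teams watch for data drift in production: comparing a feature’s live distribution against its training baseline with a two-sample Kolmogorov–Smirnov test. The feature values, window sizes and 0.05 threshold below are illustrative assumptions, not specifics from this article.

```python
# Minimal sketch: flag data drift by comparing a production feature's
# distribution against the training-time baseline with a two-sample KS test.
# The synthetic data, window sizes and 0.05 threshold are illustrative
# assumptions, not prescriptions from the article.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(train_values: np.ndarray, live_values: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly
    from the training distribution for this feature."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha


# Example: baseline drawn from one distribution, live traffic shifted.
rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.6, scale=1.0, size=1_000)  # drifted window

if detect_drift(train_feature, live_feature):
    print("Data drift detected -- consider retraining or recalibrating the model.")
else:
    print("No significant drift detected in this window.")
```

In practice, a check like this runs on a schedule for every monitored feature, and each triggered alert kicks off the investigation and retraining work that drives the ongoing maintenance cost described above.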