The increasing use of artificial intelligence (AI) topped most lists of tech trends to watch in the content management market when the 2019 predictions came out last year. Scanning the industry landscape at the midway point of the year, it’s clear that AI is front and center: nearly every viable content management platform now includes some form of AI tooling and machine learning capability.
With AI now a ubiquitous content management technology, users are increasingly familiar and comfortable with its presence and its various uses. Content Services Platform (CSP) providers are rolling out new and more dynamic AI solutions, which in turn amplifies AI’s impact on the industry.
Until now, content management vendors often made AI for content available through outside providers – for example, through integration with public cloud offerings such as Google Vision. These offerings, which rely on algorithms trained on mass data from publicly available sources, can quickly identify content and apply three or four simple, generic metadata tags to classify it. For example, you could post a photograph of Daffy Duck, and it would tell you it was a picture of a black duck with a yellow beak.
That was once the leading edge, but AI technology is evolving and improving at a rapid pace. Today, an increasing number of organizations are realizing the benefits of deploying more advanced, customized AI algorithms that are trained with an organization’s own data rather than bulk aggregated data. This type of next-generation service, dubbed “contextual AI,” can stack countless business-specific classifiers against an individual image or piece of content, producing far more in-depth, relevant, and accurate metadata.
Now that enterprises have more than one way to leverage AI within their content management ecosystem, they’re increasingly cognizant of the difference between the two types of AI service: “generic AI” and “contextual AI.”
Previously, enterprises might have been given the option to integrate with existing AI providers (e.g., Google Vision or Amazon’s equivalent services). This is the essence of generic AI. It offered a useful first-generation service, but it is also inherently limited: because the algorithms are trained on general data sets meant to support almost any business use case, generic AI models possess almost zero knowledge about your specific organization.
Fundamentally, generic AI involves some form of automated metadata tagging. A user can upload a document or image, and receive general information about what it is – a contract, for example, or a memorandum. If you upload a photo, it might tell you general categories of items within that photo – a car, perhaps, or a tree.
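The generic AI flow described above can be sketched as a thin wrapper around a public tagging service. Everything here is illustrative: the `call_vision_provider` function and its canned response stand in for a real cloud API (a real integration would send the file bytes and receive provider-specific labels), but the pattern – send content, get back a handful of coarse labels, attach them as metadata – is the same across providers.

```python
# Hypothetical sketch of generic AI tagging: a public cloud service returns a
# few coarse labels with confidence scores, which are attached to the
# document's metadata as-is.

from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    metadata: dict = field(default_factory=dict)

def call_vision_provider(doc: Document) -> list[tuple[str, float]]:
    """Stand-in for a public vision/classification API.

    Canned response for illustration -- a real call would upload the
    file and return whatever generic labels the provider detects.
    """
    return [("duck", 0.97), ("bird", 0.94), ("beak", 0.81), ("cartoon", 0.40)]

def apply_generic_tags(doc: Document, min_confidence: float = 0.8) -> Document:
    """Keep only confident labels and store them as generic metadata tags."""
    labels = call_vision_provider(doc)
    doc.metadata["tags"] = [name for name, score in labels if score >= min_confidence]
    return doc

doc = apply_generic_tags(Document("daffy.png"))
print(doc.metadata["tags"])  # ['duck', 'bird', 'beak']
```

Note that nothing in this flow knows anything about the organization that owns the image – the labels come entirely from the provider’s general-purpose model, which is exactly the limitation the article describes.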
While limited, this form of AI still delivers real information management benefits: organizations can streamline their processes while reducing storage, increasing security, and making searches for content and information faster and more effective.
Contextual AI allows users to train their own custom AI models using business-specific data sets. Suppose CBS were using such a modern CSP to manage its video files of, say, the hit sitcom “Everybody Loves Raymond.” With generic AI, a popular episode that originally aired on May 5, 2003 – which famously involved Ray and Debra in a two-week standoff over a suitcase – would receive exactly the same metadata as every other episode involving a comedic marital dispute. Considering how many seasons the show ran, this is only marginally helpful. The metadata might say no more than “Ray, Debra, disagreement.”
Contextual AI trained with custom data sets, by contrast, can distinguish between all of Ray and Debra’s humorous spats – and can do so across the show’s entire run. The custom AI models would apply metadata telling CBS “Ray, Debra, battle of wills, who is responsible, putting suitcase away,” rather than simply “Ray, Debra, argue.”
This level of detail is far more accurate and impactful than what generic AI systems provide, and far more valuable to enterprises as well. What’s more, such a system is easy for customers to use – they don’t need data scientists to build the machine learning models. Anyone within the organization can create a custom AI model for their own purposes: they simply click a button to tell the system what data to use and what fields to predict. The deployed models can easily be refined as new data comes in, and are in turn used to intelligently tag new data as it is ingested into the system.
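The “train on your own data, then auto-tag new content” loop can be sketched in miniature. The episode descriptions and tag names below are hypothetical, and the model is a deliberately tiny bag-of-words matcher rather than anything a real CSP would ship – the point is only the shape of the workflow: labeled business-specific examples go in, a model comes out, and new content is tagged with business-specific metadata.

```python
# Illustrative sketch of the contextual-AI loop: train a model on an
# organization's own labeled examples, then use it to tag new content.
# The "model" is just per-tag word counts -- a stand-in for real ML.

from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def train(examples: list[tuple[str, str]]) -> dict[str, Counter]:
    """Build per-tag word counts from business-specific labeled examples."""
    model: dict[str, Counter] = {}
    for text, tag in examples:
        model.setdefault(tag, Counter()).update(tokenize(text))
    return model

def predict(model: dict[str, Counter], text: str) -> str:
    """Tag new content with whichever tag's vocabulary it overlaps most."""
    words = tokenize(text)
    return max(model, key=lambda tag: sum(model[tag][w] for w in words))

# Hypothetical, business-specific training data (episode descriptions).
examples = [
    ("ray debra standoff suitcase stairs two weeks", "battle of wills: suitcase"),
    ("ray debra argue over canister tuna apology", "battle of wills: canister"),
]
model = train(examples)
print(predict(model, "debra leaves the suitcase on the stairs"))
# -> 'battle of wills: suitcase'
```

Refining the model as new data arrives is, in this sketch, just calling `train` again on the accumulated examples – which mirrors the click-a-button retraining experience described above.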
Content matters … especially with AI
AI has lived up to its billing as a 2019 trend to watch. The next stage for AI within the content management sector is deploying the technology in ways that automate and enhance enterprise functions. AI is growing into a much more flexible and extensible tool, and by leveraging contextual AI, businesses can train their own custom AI models using business-specific data sets to get even more value from their data and content.
David Jones is VP of Product Marketing for Content Services at Nuxeo, responsible for developing the global go-to-market strategy and execution plan for Nuxeo’s modern enterprise Content Services Platform. He has over 20 years’ experience in the emerging technologies space across multiple industries, including big data, analytics, cloud, and enterprise content management. Prior to joining Nuxeo, David was Vice President of European Operations at AIIM, was CEO and founder of a document management startup for eight years, and has held product and marketing management roles with Konica Minolta and Hyland. David also holds a place on the board of directors of AIIM, the non-profit industry association. He has a holistic understanding of the challenges facing stakeholders across every facet of the organization and is passionate about delivering modern, future-forward technologies and solutions that truly make a difference to organizations, employees, and customers.