Is Bad Data Polluting IT Processes and Decisions?

This guest blog was contributed by Lennard Fischer | 4/2/14

Technology is at the heart of many business processes and workflows – it’s often a strategic investment or competitive differentiator. Yet the processes by which businesses acquire and manage technology are themselves at risk. 

Technology is changing rapidly, and enterprises find themselves managing both new and legacy technology at the same time. A few years ago, IT procurement teams didn't have to worry about mobile apps or licenses deployed on virtual servers in cloud infrastructure. With the introduction of cloud computing, mobility and BYOD, the technology purchasing environment is becoming more fragmented and complex. The symptoms of these stresses are everywhere:

  • “Shadow IT” – purchases that bypass traditional procurement systems and due diligence
  • Time-consuming vendor audits, with confusion over entitlements and licensing
  • Expired maintenance contracts that are only discovered when support is needed
  • Data center consolidation, energy usage reduction, or virtualization initiatives delayed by a lack of data

All of these problems come down to data – missing, incomplete, out-of-date and inaccurate data about technology purchases and entitlements.

Finding the Source of the Data Problem

The data that drives procurement processes originates in the Purchase Order (PO). Like the source of a river, any corruption in the PO spreads rapidly downstream to all of the systems and processes that rely on this data. As we’ve already shown in previous blogs, even the best-managed companies have problems with their PO data. (See our earlier Workflow blog post – Procurement Data by the Numbers).

This might seem like a process problem – fix the PO or purchasing processes and the data will improve. But many data problems are actually caused by external agents – the technology vendors themselves. 

  • Resellers use different part numbers, descriptions and bundles for technology than the original manufacturers. If you purchase from both, your data will be inconsistent.
  • Technology vendors bundle solutions and acquire and re-brand other vendors’ solutions – making existing purchase data obsolete.
  • Licensing models change over time, making it difficult to determine actual entitlements.
  • Vendors set “end-of-life” dates on products after the fact.

Addressing the Data Quality Problems

How can technology procurement teams address these challenges? The answer lies in fixing data quality at its source. Find ways to transform incomplete and inconsistent PO data into clean, consistent data to drive IT decisions. 

Look for missing data in text fields. Start by recognizing that your PO data may be flawed and analyze it more deeply. For example, quantities and support information are often hidden in unstructured description fields. Direct your Extract-Transform-Load (ETL) processes to analyze both structured and unstructured fields to surface this essential information.
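
To make this concrete, here is a minimal sketch of pulling quantity and support-term hints out of a free-text description field. The field format and regular expressions are illustrative assumptions, not a real PO schema – production ETL would need patterns tuned to your actual data.

```python
import re

def parse_description(description):
    """Extract quantity and support-term hints from an unstructured PO
    description field. Patterns here are illustrative examples only."""
    result = {"quantity": None, "support_years": None}

    # Look for a quantity like "Qty: 25" or "qty 25"
    qty = re.search(r"\bqty[:\s]*(\d+)\b", description, re.IGNORECASE)
    if qty:
        result["quantity"] = int(qty.group(1))

    # Look for a support/maintenance term like "3-yr support" or "1 year maintenance"
    support = re.search(
        r"\b(\d+)[\s-]*(?:yr|year)s?\s+(?:support|maintenance)\b",
        description, re.IGNORECASE)
    if support:
        result["support_years"] = int(support.group(1))

    return result

print(parse_description("MS SQL Svr Std, Qty: 25, incl. 3-yr support"))
# {'quantity': 25, 'support_years': 3}
```

Even a handful of patterns like these can recover structured fields that downstream license-compliance and renewal processes depend on.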

Normalize the PO data. The next step is to normalize the different product names and categories to a consistent taxonomy so that both purchasing and IT teams can make sense of the data you have. You can use a shared taxonomy such as BDNA Technopedia, or create your own to fit specific business needs.
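
As a toy illustration of the normalization step, the sketch below maps the varied names that resellers and manufacturers use onto one canonical taxonomy entry. The aliases and taxonomy records are made up for illustration; a real implementation would draw on a catalog like Technopedia or your own curated mapping tables.

```python
# Canonical taxonomy: one entry per product, regardless of who sold it.
# These records are invented examples.
TAXONOMY = {
    "sql server 2012 standard": {
        "vendor": "Microsoft",
        "product": "SQL Server",
        "version": "2012",
        "edition": "Standard",
        "category": "Database",
    },
}

# Alias table mapping reseller/manufacturer name variants to taxonomy keys.
ALIASES = {
    "ms sql svr std 2012": "sql server 2012 standard",
    "microsoft sql server standard ed. 2012": "sql server 2012 standard",
}

def normalize(raw_name):
    """Resolve a raw PO product string to its canonical taxonomy entry.
    Returns None for unrecognized names, which should be flagged for review."""
    key = raw_name.strip().lower()
    key = ALIASES.get(key, key)
    return TAXONOMY.get(key)

print(normalize("MS SQL Svr Std 2012")["category"])
# Database
```

The key design point is that both purchasing and IT teams query the canonical record, so reports no longer fragment across reseller-specific part numbers.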

Complete and update data with external intelligence. Your internal PO data isn’t enough to support downstream decisions – the technology vendors themselves contribute vital information about product name changes, acquisitions and support status. Look for ways to both update and complete your internal purchasing data with this vendor intelligence.
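
The enrichment step might look like the following sketch, which merges normalized purchase records with an external vendor-intelligence feed carrying rebrands and end-of-life dates. The catalog entry and product names are invented purely for illustration.

```python
from datetime import date

# External vendor intelligence: rebrands, acquisitions, support status.
# This entry is a fabricated example, not real vendor data.
VENDOR_INTEL = {
    ("ExampleCorp", "ExampleDB 5.0"): {
        "current_name": "ExampleDB Enterprise 5.0",  # renamed after an acquisition
        "end_of_support": date(2013, 6, 30),         # EOL date set after the fact
    },
}

def enrich(record, as_of):
    """Merge vendor intelligence into a purchase record and flag its
    support status as of the given date."""
    intel = VENDOR_INTEL.get((record["vendor"], record["product"]))
    if intel is None:
        return {**record, "status": "unknown"}  # flag for manual review
    status = "supported" if intel["end_of_support"] >= as_of else "end-of-life"
    return {**record, **intel, "status": status}

record = {"vendor": "ExampleCorp", "product": "ExampleDB 5.0"}
print(enrich(record, as_of=date(2014, 4, 2))["status"])
# end-of-life
```

Records that come back "unknown" or "end-of-life" are exactly the ones that cause surprise support gaps and audit confusion, so surfacing them early is the payoff of this step.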

Share the data source with other enterprise processes and groups. Use this newly cleaned, updated and complete data to feed the various systems that rely on purchasing data as a data source. 

Because procurement data feeds many critical enterprise decisions and processes, a strategic investment in cleaning the data at the source can have a broad business impact. From procurement to IT to security and strategic planning, clean IT data helps drive better business decisions.

Lennard Fischer

Len Fischer is the executive vice president of marketing responsible for BDNA’s corporate brand, field marketing, corporate communications, web properties, and integrated marketing efforts. Most recently, Len held key leadership roles in marketing and sales at Informatica and played an instrumental role in its growth from an early technology pioneer to the number one independent provider of data integration software. Len holds a BS from Northern Illinois University and an MBA from Pepperdine University. On a personal note, Len enjoys spending time with family. He loves playing golf and chess, as well as following Bay Area sports.