Isaac Sacolick
Contributing writer

7 steps to improve analytics for data-driven organizations

analysis
01 Jul 2024 · 10 mins
Analytics

Effective data-driven decision-making requires good tools, high-quality data, efficient processes, and prepared people. Here’s how to achieve it.

[Image: Two people shake hands over a laptop. Credit: CC0]

When leaders say they want to be a data-driven organization, a key objective is empowering business people to use data, predictive models, generative AI capabilities, and data visualizations to improve decision-making.

Leaders seek smarter decisions that yield positive business benefits, faster decision-making to respond to opportunities, safer decisions that minimize risks, and change management disciplines to grow the number of employees using analytics tools across the organization. They also seek scalable solutions using the latest machine learning models, AI capabilities, and new data assets, ensuring that data is compliant, protected, and secure.

“To out-compete, you must out-innovate your competitors, which relies on making quick and effective decisions,” says Wayne Jackson, CEO of Sonatype. “Leaders need a full picture to make informed decisions, and gaining that level of visibility requires comprehensive data. But data alone won’t improve or accelerate the process, and you must be able to make sense of that data.”

While many organizations have invested in data architectures, deployed analytics tools, built machine learning models, and rolled out data visualization capabilities, end-user adoption may lag, and business impacts may be disappointing. For example, The State of Data Science and Machine Learning reports that 45% of organizations deploy less than 25% of their machine learning models to production.

This article looks at seven steps to help close the gap between merely deploying analytics and getting end-users to adopt analytics for decision-making. The first four steps focus on how individual teams, departments, and businesses can improve their analytics development process, while the last three are about scaling those practices across larger businesses and enterprises.

Understand end-users and their decision flows

Conducting some upfront discovery work around a new data set or an analytics domain is important. But it’s easy to take these efforts too far and deploy proofs of concept into production, skipping key steps: defining the end-user personas, reviewing their workflows, and discussing the decisions and actions where analytics are needed.

“Historically, the way analytics has been developed was to start with well-organized data, slap a bunch of well-thought-out algorithms on it, review what the data confesses, and expose recommendations in the form of visuals,” says Soumendra Mohanty, chief strategy officer at Tredence. “This approach misses capturing input from the end user who will make decisions in their daily activity, whether that’s an inventory manager, a campaign director, or a factory warehouse foreperson looking for real-time recommendations and directives on an hourly basis to put into action.”

Here are several questions to consider asking end-users.

  • How, when, and how frequently are end-users and managers making key decisions today?
  • What’s the impact of a wrong or a slow decision versus the value of making faster and more accurate decisions?
  • What data and information do they use for making decisions, and what steps are they taking to access it?
  • What tools are they using to take action on their decisions?

The key is understanding how analytics fits into workflows, what integrations to consider, and where automation is possible.

Define data quality requirements and remediations

Of course, many end-users won’t be able to distinguish among statistical analytics, machine learning, and genAI solutions, but they can easily see when the data is wrong or a solution produces erroneous recommendations. Improving data quality is an iterative process, but if it isn’t addressed early in the development process, end-users will lose trust and return to how they previously worked.

“Ready-to-use, high-quality business data is essential for ensuring accurate enterprise analytics and leveraging the benefits of genAI,” says Irfan Khan, president and chief product officer of SAP HANA Database & Analytics. “Only with a strong data foundation and a unified view of data across their complex landscapes are businesses empowered to facilitate fully digitalized business processes and seamless data exchange across their enterprise. Without clean business data, most AI-derived information cannot be trusted or effectively used.”

Agile data science teams at top organizations take on data integration and quality requirements as part of delivering analytics capabilities. They define data quality metrics as non-functional requirements, publish improvement efforts, and update stakeholders as metrics improve.
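To make “data quality as a non-functional requirement” concrete, here is a minimal sketch of computing and publishing a few quality metrics with pandas. The orders table and its order_id, customer_id, amount, and created_at columns are hypothetical assumptions, not a standard schema:

```python
import pandas as pd

def data_quality_metrics(df: pd.DataFrame) -> dict:
    """Compute example quality metrics for a hypothetical orders table."""
    now = pd.Timestamp.now(tz="UTC")
    return {
        # Completeness: share of rows with no missing values in key columns
        "completeness": float(df[["order_id", "customer_id", "amount"]].notna().all(axis=1).mean()),
        # Validity: share of rows with a positive order amount
        "validity_amount": float((df["amount"] > 0).mean()),
        # Uniqueness: share of distinct order IDs
        "uniqueness_order_id": df["order_id"].nunique() / len(df),
        # Freshness: hours since the most recent record arrived
        "freshness_hours": (now - df["created_at"].max()).total_seconds() / 3600,
    }

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "customer_id": ["a", "b", None, "d"],
    "amount": [10.0, -5.0, 20.0, 15.0],
    "created_at": pd.to_datetime(["2024-06-30", "2024-06-29", "2024-06-28", "2024-06-27"], utc=True),
})
for name, value in data_quality_metrics(orders).items():
    print(f"{name}: {value:.2f}")
```

Tracking metrics like these per release gives stakeholders visible evidence that quality is trending upward alongside new analytics capabilities.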

Accelerate time to data and decisions

Beyond data quality, teams should focus on two other analytics metrics related to speed. Time-to-data accounts for the delays in receiving and processing data, while time-to-decision accounts for the human factors, usability, integration, and level of automation between when data is available and when end-users make decisions.

“Time-to-data used to be the privilege of high-frequency trading platforms years ago,” says Nikolaos Vasiloglou, VP of research ML at RelationalAI. “Now anyone can access cheap, infinite storage, computing, and software tools to consume data in real-time.”

While more organizations can acquire scalable infrastructure, optimizing data management and developing robust data pipelines requires architecture planning and design. One way to avoid pitfalls is to start with smaller-scoped analytics objectives and validate the architecture’s performance while scaling usage, data, and capabilities.
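To make the two metrics measurable, here is a minimal sketch in Python that computes them from event timestamps. The occurred, available, and decided fields are illustrative assumptions about what a team might log, not a standard schema:

```python
from datetime import datetime
from statistics import median

# Hypothetical decision events: when the business event occurred, when its
# data became available to analysts, and when an end-user acted on it.
events = [
    {"occurred": datetime(2024, 7, 1, 9, 0),
     "available": datetime(2024, 7, 1, 9, 4),
     "decided": datetime(2024, 7, 1, 10, 30)},
    {"occurred": datetime(2024, 7, 1, 9, 15),
     "available": datetime(2024, 7, 1, 9, 18),
     "decided": datetime(2024, 7, 1, 11, 0)},
]

def minutes(delta):
    return delta.total_seconds() / 60

# Time-to-data: lag from the event to the data being queryable
time_to_data = median(minutes(e["available"] - e["occurred"]) for e in events)
# Time-to-decision: lag from data availability to the end-user's action
time_to_decision = median(minutes(e["decided"] - e["available"]) for e in events)

print(f"Median time-to-data: {time_to_data:.1f} minutes")
print(f"Median time-to-decision: {time_to_decision:.1f} minutes")
```

Instrumenting both lags separately shows whether the bottleneck is the pipeline or the people and workflows around it.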

Implement data protection early

That rush to prototype analytics solutions and ensure low-latency data pipelines can come at significant risk and cost if regulated data is compromised. It’s often more cost-effective to address the required data protections in data pipelines and data management platforms than to implement them in analytics solutions.

“All regulated data should be cryptographically protected (encrypted, masked, or tokenized) early in the data pipeline when the data is created or captured,” says Ameesh Divatia, CEO and co-founder at Baffle. “Once this is done, downstream data usage for all use cases, including genAI, could go much faster since no additional data discovery or review is necessary before using that data.”

Implementing data protection early in the process also creates the opportunity to engage end-users and stakeholders on data security best practices.
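As one illustration of protecting regulated fields at the point of capture, here is a minimal sketch of deterministic tokenization using Python’s standard library. The tokenize function and TOKENIZATION_KEY variable are hypothetical; a production system would use a key management service and a vetted tokenization or encryption product rather than this sketch:

```python
import hashlib
import hmac
import os

# In production, the key would come from a KMS or secrets manager,
# never from a hardcoded default like this demo fallback.
SECRET_KEY = os.environ.get("TOKENIZATION_KEY", "demo-only-key").encode()

def tokenize(value: str) -> str:
    """Deterministically tokenize a regulated value (e.g., an email address).

    The same input always yields the same token, so downstream joins and
    group-bys still work, but the raw value never leaves the ingestion step.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_email": "jane@example.com", "amount": 42.0}
record["customer_email"] = tokenize(record["customer_email"])
print(record)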

Scale data governance programs

The steps I covered thus far can help improve analytics implementations and decision-making for individual use cases. Scaling analytics-driven decision-making to multiple businesses, departments, or domains requires evolving an analytics operating model and establishing data governance policies and practices.

Felix Van de Maele, CEO of Collibra, shared with me how even very large enterprises can establish data governance practices quickly. “Data governance is the foundation for unlocking the true potential of AI,” he says. “McDonald’s, one of the world’s most recognizable brands, established a trusted data foundation in just 60 days with over 570 users across 21 countries already on board. These advancements have transformed how McDonald’s uses data, leading to greater transparency, trust, and speed for their global business users.”

A key data governance tool for scaling data-driven organizations is the data catalog, which helps implement access policies, configure authorizations, enable discovery, and maintain data dictionaries. Top data catalog and quality vendors include Alation, Collibra, Informatica, Google, Hitachi Vantara, IBM, Microsoft, Oracle, Precisely, SAP, SAS, and Talend.  
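To illustrate what a catalog entry captures, here is a pared-down sketch of a catalog record with a role-based access check. The CatalogEntry class and its fields are a hypothetical simplification of what the commercial catalogs above provide:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """A pared-down catalog record: ownership, data dictionary, access policy."""
    name: str
    owner: str
    description: str
    columns: dict[str, str]                  # column name -> business definition
    allowed_roles: set[str] = field(default_factory=set)

    def can_access(self, role: str) -> bool:
        """Enforce a simple role-based access policy."""
        return role in self.allowed_roles

orders = CatalogEntry(
    name="sales.orders",
    owner="data-platform-team",
    description="One row per customer order, refreshed hourly.",
    columns={"order_id": "Unique order identifier", "amount": "Order total in USD"},
    allowed_roles={"analyst", "finance"},
)
print(orders.can_access("analyst"))    # True
print(orders.can_access("marketing"))  # False
```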

“Data catalogs that provide robust data governance and proactive quality monitoring drive confident business decisions,” says Emily Washington, SVP of product management at Precisely. “Given the heightened risks of ungoverned or inaccurate data in the AI era, prioritizing data catalogs that empower users with a comprehensive understanding of their data and its underlying health will enable them to harness data effectively, driving revenue and increased profits through confident reliance on business decisions derived from AI and advanced analytics.”

Gartner recently reported that 78% of chief data and analytics officers (CDAOs) are evolving their operating models to support innovation better, and 61% said that market disruptions, including ChatGPT, were a driver. One critical aspect of evolving the operating model is accelerating proactive data governance practices such as creating data catalogs, centralizing data resources, and improving data quality.

Establish and improve implementation standards

Creating implementation standards sometimes falls under data governance, but the tools, development lifecycle, testing, deployment requirements, documentation, and usability standards cover a broader set of disciplines.

Data-driven organizations create and evolve standards so that data science teams focus on the end user and deliver benefits. A standards playbook helps accelerate delivery, scale best practices, and establish deployment requirements.   

Marty Andolino, VP of engineering at Capital One, shares these recommendations regarding creating data standards and their benefits. “Data standards, such as metadata, quality, formats, SLAs, and observability, ensure integrity, ease of use, and security throughout the data lifecycle. Embedding these standards into unified, self-service experiences empowers users to trust and use data as it is shared across the enterprise.”

Another best practice for smarter data visualizations is to define a style guide covering layouts, chart types, color schemes, naming conventions, and other usability considerations. Dashboards may be underutilized when they’re too slow, not oriented to solving specific problems, or when multiple dashboards lack consistent usability standards.
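For teams that build charts in code, a style guide can be enforced programmatically rather than documented and forgotten. Here is a minimal sketch using matplotlib’s rcParams; the HOUSE_STYLE values are hypothetical choices a real style guide would standardize:

```python
import matplotlib
matplotlib.use("Agg")  # render headlessly for this example
import matplotlib.pyplot as plt
from cycler import cycler

# Hypothetical house style: one shared place to define palette, type, and layout
HOUSE_STYLE = {
    "axes.prop_cycle": cycler(color=["#0B5FFF", "#FF6B35", "#2E933C", "#6C757D"]),
    "axes.titlesize": 14,
    "axes.labelsize": 11,
    "figure.figsize": (8, 4.5),
    "axes.spines.top": False,
    "axes.spines.right": False,
}
plt.rcParams.update(HOUSE_STYLE)  # every chart built after this inherits the style

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [3, 5, 4], label="Revenue")
ax.set_title("Monthly revenue (house-style defaults)")
ax.legend()
fig.savefig("revenue.png")
```

Shipping the style as a shared module means every dashboard inherits the same look without per-chart effort.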

Another consideration is how analytics tools, dashboards, and ML models get tested. Giovanni Lanzani, managing director at Xebia Data, recommends that data teams “start testing data from the source through all the transformations that ultimately generate the insights the business relies on, catching issues as they arise instead of serving incorrect insights to business users.”
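Here is a minimal sketch of that kind of transformation test in Python with pandas. The to_daily_revenue transformation and the invariants checked are illustrative assumptions, not Xebia Data’s method:

```python
import pandas as pd

def to_daily_revenue(orders: pd.DataFrame) -> pd.DataFrame:
    """Example transformation: aggregate raw orders into daily revenue."""
    return (orders.assign(day=orders["created_at"].dt.date)
                  .groupby("day", as_index=False)["amount"].sum())

def test_daily_revenue_preserves_total():
    orders = pd.DataFrame({
        "created_at": pd.to_datetime(["2024-07-01 09:00", "2024-07-01 17:00",
                                      "2024-07-02 08:00"]),
        "amount": [10.0, 5.0, 7.5],
    })
    daily = to_daily_revenue(orders)
    # Invariant: aggregation must neither create nor lose revenue
    assert daily["amount"].sum() == orders["amount"].sum()
    # Invariant: one output row per calendar day in the input
    assert len(daily) == orders["created_at"].dt.date.nunique()

if __name__ == "__main__":
    test_daily_revenue_preserves_total()
    print("transformation invariants hold")
```

Running checks like these at every stage catches bad data where it enters rather than in the executive dashboard.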

Larger enterprises with large-scale operational, analytical, and unstructured data sets should also define data management and architecture standards. Aislinn Wright, VP of product management at EDB, says, “Organizations should adopt a data platform that unifies transactional, analytical, and AI data and that implements open and portable standards for deploying new analytics and data science projects rapidly.”

Another key objective should be to simplify how authorized end users can access and discover enterprise data. “If data lives in dozens of systems and there are no standards and patterns for facilitating the quick accessibility and utilization of data, any effort to take action with that data is going to be grueling,” says Krishna Sudhakar, director of partner advisory at Pricefx.

Daniel Fallmann, CEO of Mindbreeze, shares an approach to simplifying data access and discovery. “Business people can simplify the process of finding relevant data sources by implementing semantic (graph) indices and intelligent and highly automated metadata management, enabling easy discovery and understanding of internal and external datasets.”

Promote a data-driven culture

Technology capabilities, data governance, and analytics practice standards are the building blocks, but digital trailblazers must evolve the culture to truly transform into data-driven organizations. Transformation also must be ongoing because genAI, real-time analytics, and other emerging technologies are providing greater capabilities to augment human intelligence with smarter, faster, and safer decision-making capabilities.  

A culture starting point is to improve communication and collaboration across the organization. “Companies need to focus on breaking down silos between business units, functions, and technologies that hinder information sharing and informed decision-making,” says John Castleman, CEO of Bridgenext. “All too often, these internal constructs stand in the way of achieving operational efficiency, revenue growth, and innovation.”

An easy win is to schedule frequent, company-wide demonstrations of new and upgraded analytics capabilities, showing the types of decisions being made with them, their business impacts, and end-user successes. While there may be some initial fears about using new tools and analytics for decision-making, successful and happy end-users help promote the benefits of adoption.

Adopting analytics capabilities can lead to competitive business benefits and culture change. Start with the end-user in mind, build trust in the data and capabilities, evolve data governance, and improve implementation standards to drive the transformation.


Isaac Sacolick, President of StarCIO, a digital transformation learning company, guides leaders on adopting the practices needed to lead transformational change in their organizations. He is the author of Digital Trailblazer and the Amazon bestseller Driving Digital and speaks about agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO, a digital transformation influencer, and has over 900 articles published at InfoWorld, CIO.com, his blog Social, Agile, and Transformation, and other sites.

The opinions expressed in this blog are those of Isaac Sacolick and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
