Experimentation is a critical driver of learning and innovation in data and analytics programs, enabling organizations to unlock the full value of their investments. Once you have established a clear vision for your data-driven organization’s future state and defined the necessary organizational capabilities and strategy, the next crucial phase involves designing and implementing your data infrastructure and products. To achieve this, you need an architectural blueprint that reflects your approach to moving, storing, integrating, and consuming data.
Designing a robust data and analytics architecture requires adopting a learning mindset that distinguishes ‘what is known’ from ‘what we think or what we have been told’. By conducting short experiments with clear goals, you validate hypotheses and sift out untested assumptions. Unfortunately, many data programs fail because they are built on untested theories and noise, like houses built on sand rather than solid rock.
So, how should you plan your experiments, and how do they relate to your ability to innovate?
Experiments help separate signal from noise. Innovation, in this context, means prioritizing high-value experiments over those that can safely be delayed. Naturally, there are cost and resource considerations: experimentation needs its own infrastructure, free from the stifling policies that govern enterprise systems. I have gained valuable insights into innovation and experimentation from my interactions with Vishal Sikka, the former CEO of Infosys and ex-CTO of SAP.
Consider three key dimensions (a scoring sketch follows the list):
1. Distance to value: Measures the uncertainty or risk in realizing business value from the experiment. The governance team should define metrics to assess how key business processes are impacted.
2. Distance to execute: Measures the complexity and risk associated with implementing the architectural plan given your team, skills, and requirements.
3. Distance to the user: Measures the effectiveness of the user experience and user adoption.
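To make this prioritization concrete, here is a minimal sketch of how a team might score candidate experiments against these three dimensions. The Experiment class, the 1-to-5 scoring scale, the example candidates, and the equal weighting are all illustrative assumptions rather than a prescribed method; a governance team would substitute its own metrics and weights.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A candidate experiment scored on the three dimensions above.

    Each score is a hypothetical 1-5 rating agreed by the governance
    team: lower means a shorter 'distance' (less risk or uncertainty).
    """
    name: str
    distance_to_value: int    # uncertainty/risk in realizing business value
    distance_to_execute: int  # complexity/risk of implementation
    distance_to_user: int     # risk to user experience and adoption

def priority(exp: Experiment, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three distances; shorter total means run it sooner.

    Equal weights are an illustrative assumption; tune them to your
    own governance metrics.
    """
    wv, we, wu = weights
    return (wv * exp.distance_to_value
            + we * exp.distance_to_execute
            + wu * exp.distance_to_user)

# Hypothetical candidates, purely for illustration.
candidates = [
    Experiment("CDC pipeline into the lakehouse", 2, 3, 4),
    Experiment("Self-service BI semantic layer", 3, 2, 2),
    Experiment("Real-time ML feature store", 4, 5, 3),
]

# Run the experiments with the shortest combined distance first.
for exp in sorted(candidates, key=priority):
    print(f"{priority(exp):>4.1f}  {exp.name}")
```

Even a crude ranking like this forces the team to state its assumptions explicitly, which is precisely what the experiments then confirm or refute.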
Experiments provide clarity on the complexity, effort, and value associated with technical decisions. They also enhance the team’s understanding of technology and align plans with reality. Additionally, experiments test the team’s resilience, adaptability, and willingness to accept failure. Leadership plays a crucial role in making tough decisions during this process.
This post represents the second instalment in my series, where I distil my experience to address the key issues impeding the success of data and analytics programs. In upcoming posts, we will explore common pitfalls to avoid and discuss strategies for scaling up from experiments to production.
John Kuriakos