Earlier this year, my team was kicking off the discovery phase of a new data architecture engagement with a client when their Head of Data & Analytics asked for my opinion about building “data products”. If you’ve been keeping up with the data world lately, you can probably guess that my ears perked up: I immediately recognized this as a question about the data mesh framework. The client had many questions about data mesh: What actually is data mesh? Would it even work in their organization? Was a data mesh the right choice for the future of their data program? As with any new (or new-ish) data trend, these are important questions to explore before shifting data strategies.
What is data mesh?
A data mesh architecture moves companies away from “centralized and monolithic” and towards a distributed data system where the domains are responsible for the stewardship of their own data pipelines. The theory behind the data mesh is that those closest to the data have ownership of it, from origination to aggregation, with support from the centralized data teams and IT department. The data mesh architecture was created by Zhamak Dehghani, the ThoughtWorks consultant who literally wrote the book on the subject, as a solution to what she considers the failures of most modern data platform initiatives.
The four pillars of a data mesh
There are four pillars in a data mesh: Decentralized Domain Ownership, Data as a Product (DaaP), Self-Serve Data Platform, and Universal Governance Standards – what Dehghani calls “federated computational governance”.
Decentralized domain ownership
Decentralized domain ownership puts domain experts in charge of their data pipelines. In a typical centralized data architecture, product and/or vertical teams are responsible for capturing their data and bringing it into the firm (APIs, apps, etc.); the central data team then picks it up, builds the pipelines to get it into enterprise data systems, works with the domain teams on transformations, and often helps bring it into the BI tool. In a data mesh, however, the domain team never hands the data engineering work over to the enterprise team; instead, it owns each step of the pipeline.
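To make that ownership shift concrete, here is a minimal sketch in Python of a fully domain-owned pipeline. The marketing domain, the function names, and the data product name are all hypothetical stand-ins; in a real stack the capture, storage, and orchestration layers would come from your own tooling:

```python
# Hypothetical sketch: the marketing domain owns every step of its own
# pipeline -- capture, transformation, and publishing -- instead of
# handing the engineering work off to a central team.

def extract_campaign_events() -> list[dict]:
    """Capture raw events from the domain's own source (API, app, etc.).
    Stubbed with sample records to keep the sketch self-contained."""
    return [
        {"campaign_id": "c-101", "spend": 1250.504, "status": "completed"},
        {"campaign_id": "c-102", "spend": 310.0, "status": "running"},
    ]

def transform(events: list[dict]) -> list[dict]:
    """Apply the business logic the marketing team knows best."""
    return [
        {"campaign_id": e["campaign_id"], "spend_usd": round(e["spend"], 2)}
        for e in events
        if e["status"] == "completed"
    ]

def publish(rows: list[dict], product: str) -> None:
    """Serve the result as a data product; in practice this would write to
    a warehouse table or shared catalog rather than print."""
    print(f"published {len(rows)} rows to {product}")

publish(transform(extract_campaign_events()), "marketing.campaign_spend")
```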
Data as a Product (DaaP)
Data as a product, the next pillar of data mesh, is a shift in perspective rather than a shift in organization. It tasks domain and central data teams with treating the data they generate as a product, with other product owners, analysts, and data scientists as the customers. Data has a lifecycle, and it benefits from product management and stewardship; data customers need their data to be discoverable, accessible, self-describing, trustworthy, secure, and interoperable.
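One lightweight way to picture those product qualities is as metadata that travels with the data itself. The descriptor below is a sketch of my own design, not any particular catalog’s schema; every field name is an assumption for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical descriptor making a dataset "product-like": discoverable
# (name, description, owner), self-describing (schema), trustworthy
# (freshness SLA), secure (flagged PII), and interoperable (shared types).

@dataclass
class DataProduct:
    name: str                 # discoverable: catalog-friendly identifier
    description: str          # self-describing: what the data means
    owner: str                # accountable domain team, not a central team
    schema: dict[str, str]    # column name -> type, for interoperability
    freshness_sla_hours: int  # trustworthy: how stale the data may get
    pii_columns: list[str] = field(default_factory=list)  # secure: sensitive fields

campaign_spend = DataProduct(
    name="marketing.campaign_spend",
    description="Completed-campaign spend, one row per campaign",
    owner="marketing-data@example.com",
    schema={"campaign_id": "string", "spend_usd": "decimal(12,2)"},
    freshness_sla_hours=24,
)
```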
Self-serve data platform
A self-serve data platform is data mesh’s solution to what could otherwise be very specialized and siloed infrastructure. This pillar supports domain engineers in bringing their data to market. Approved toolsets and processes are in place that, as Dehghani states, “solve the need for duplicating the effort of setting up data pipeline engines, storage, and streaming infrastructure. A data infrastructure team can own and provide the necessary technology that the domains need to capture, process, store and serve their data products.” This is a standard that DAS42 has proclaimed from the rooftops for years. We partner with many of the best data platforms on the market, and we believe in standardizing your data platform tools to increase efficiency and reduce the cognitive load of simply making data available.
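In practice, “self-serve” often takes the shape of a thin internal SDK that the platform team maintains: a domain engineer makes one call and gets standardized storage, scheduling, and monitoring. The sketch below is entirely hypothetical – `PlatformSDK` is an illustrative stand-in, not a real library – but it shows the shape of the abstraction:

```python
# Hypothetical internal SDK a central platform team might provide so that
# domain teams never re-solve storage, scheduling, or monitoring.

class PlatformSDK:
    """Standardized, pre-approved infrastructure behind one interface."""

    def create_pipeline(self, *, domain: str, product: str,
                        schedule: str, storage_tier: str = "warehouse") -> str:
        # In a real platform this would provision orchestration jobs,
        # storage, and alerting from vetted templates.
        pipeline_id = f"{domain}.{product}"
        print(f"provisioned {pipeline_id}: schedule={schedule}, "
              f"storage={storage_tier}, monitoring=default")
        return pipeline_id

platform = PlatformSDK()

# A marketing engineer self-serves a compliant pipeline in one call,
# instead of standing up bespoke infrastructure.
platform.create_pipeline(domain="marketing", product="campaign_spend",
                         schedule="0 6 * * *")
```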
Universal governance standards
Finally, universal standards for data and platform governance help engineers and customers make sense of the mesh. All domains’ pipelines must follow the same set of rules, standards, and SLAs to allow for interoperability and universal understanding. This federation layer includes components like quality standards, metadata, and discoverability. This pillar ensures that data from across the business can be stitched together for rich insights: how user data from all products and verticals converges into a Customer 360, how product can learn from marketing (and vice versa), or how finance can get a clear picture across all revenue streams.
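Technically, federated governance often surfaces as automated checks that every domain’s pipeline must pass before publishing to the mesh. Here is a minimal, hypothetical sketch of such a check in plain Python; the metadata fields and thresholds are assumptions, and real implementations would typically lean on dedicated data-quality tooling:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical federated-governance gate: every domain's pipeline runs the
# same validations before a data product is published to the mesh.

REQUIRED_METADATA = {"name", "owner", "description"}

def passes_governance(product: dict, last_updated: datetime,
                      freshness_sla_hours: int) -> bool:
    """Enforce shared standards: complete metadata and a met freshness SLA."""
    missing = REQUIRED_METADATA - product.keys()
    if missing:
        print(f"blocked: missing metadata {sorted(missing)}")
        return False
    age = datetime.now(timezone.utc) - last_updated
    if age > timedelta(hours=freshness_sla_hours):
        print(f"blocked: data is {age} old, SLA is {freshness_sla_hours}h")
        return False
    return True

ok = passes_governance(
    {"name": "marketing.campaign_spend",
     "owner": "marketing-data@example.com",
     "description": "Completed-campaign spend"},
    last_updated=datetime.now(timezone.utc) - timedelta(hours=2),
    freshness_sla_hours=24,
)
print("publishable:", ok)
```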
To support these pillars, the data technology needs to provide:
- Reduced cognitive load
- End-to-end ownership of data requirements
- Independent domains: marketing can’t change something out from under product mid-build
- Common language support (commonly, SQL and Python)
- Easy sharing of data artifacts (see the sketch after this list)
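To illustrate the last two needs, here is a short sketch of cross-domain sharing: a consuming team reads marketing’s published data product with plain SQL, the language both domains share. SQLite stands in for the shared warehouse, and the table name is a hypothetical example:

```python
import sqlite3

# Hypothetical sketch of "easy sharing of data artifacts": the product team
# consumes marketing's published data product using plain SQL. An in-memory
# SQLite database stands in for the shared enterprise warehouse.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marketing_campaign_spend "
             "(campaign_id TEXT, spend_usd REAL)")
conn.executemany("INSERT INTO marketing_campaign_spend VALUES (?, ?)",
                 [("c-101", 1250.50), ("c-103", 980.00)])

# The consuming team needs no knowledge of marketing's internal pipeline;
# the published table *is* the interface between the two domains.
for row in conn.execute(
        "SELECT campaign_id, spend_usd FROM marketing_campaign_spend "
        "WHERE spend_usd > 1000"):
    print(row)
```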
What needs does data mesh fill?
While a data mesh is meant to fill the gaps and remediate the faults of a more traditional centralized data architecture, it also comes with its own unique benefits. First, it forces companies to change the way they think about their data: both data as a product and as data products. Instead of the “build it and forget it” pattern that many data engineering teams fall into, the data mesh strategy holds that data and its pipelines have a lifecycle, that they evolve and grow as new opportunities arise, and that building a data product for customers leads to better engineering and a better user experience. Additionally, with domain-driven ownership, a data mesh is meant to remove the bottleneck of funneling the heavy data engineering to one single team. In theory, it allows data to be brought into operation more quickly, better meeting the ever-growing demand for “real time” data.
One of my observations about a data mesh strategy is that it allows teams greater autonomy over their data, which can lead to greater ownership and stewardship, in turn encouraging greater adoption of the data. When data engineers are fully embedded with the domain team, they are part of that team’s mission and successes; they have ownership not only of the data and pipelines, but also a direct impact on the success of the domain by helping their team make better-informed decisions. This helps create better data products, which are typically easier to use, more insightful, and more accessible.
Is the data mesh right for you?
In short: it depends. Yes, yes, I know this is the most “Consultant” answer. But before I would recommend that a client – or you – jump on the data mesh train, I encourage data managers and architects to consider what their needs are and what compromises they are willing to make.
Size and scale of company
One of the main factors in whether I would recommend for or against a data mesh is the company’s size and its projected short-term growth. Consider where on the data maturity curve a company sits and whether its most basic data needs have been satisfied. For smaller companies, a data mesh architecture may be more work than the data team is able to handle – or more than the product teams are equipped to use correctly. Their most immediate concerns may be defining KPIs, breaking down raw data silos, stabilizing their existing pipelines, or migrating from on-prem to the cloud. Introducing a data mesh architecture too early may ask the data engineering teams to bite off more than they can chew, introduce data confusion for leadership tasked with scaling a company, or over-scale the data engineering teams too early in the maturity cycle.
However, there is a tipping point in company size – primarily, in product offerings – where a data mesh becomes more feasible. My fellow Principal Consultant, Jeff Springer, gave a great example:
“Medium to enterprise-scale businesses should consider implementing a data mesh framework if they have or expect to have several different internal data pipelines. Consider a client with hundreds of products, promotions, or user interaction touchpoints. Each of those products collects user data which could be mined to improve the product, train their users, and provide the company with innumerable avenues for deeper analytics. However, because this data is treated as just a by-product, it is rarely made available for analysis.”
Company culture
For a data mesh to be effective, the company culture must foster a high sense of ownership over work and promote open communication. In practice, one main criticism I’ve heard from clients is that data mesh is frustrating because teams feel like they’re often duplicating work. With a strong federation layer, the technical risk of this should largely be mitigated. However, the real risk isn’t actually duplicating pipelines, but the frustration that comes with a breakdown in communication. The left hand needs to know what the right hand is doing. At its core, I maintain that data mesh is primarily a people and organizational architecture; the technical tools and processes could be solid recommendations for any data architecture. But coordinating people and perspectives is a lot more difficult than coordinating data. With a centralized data team building and maintaining pipelines in partnership with domain teams, much of the conversation happens through the central data team. They act not only as the builders, but also as the facilitators and moderators. In a data mesh, however, domain teams need to own that cross-domain communication themselves.
One of the most significant blockers to scaling data that I see for clients is hiring data teams. What is missing is not the quantity or quality of data, but the people who can identify the data, transform it, and interpret what it means. The success of a data mesh hinges on data engineers operating as part of the domains, but this becomes a challenge if companies are unable to hire data engineers for every domain. While having a centralized data team can certainly silo the skillset, it also makes more efficient use of those skills.
Principal consultant’s perspective
I’ve heard a data mesh strategy referred to as “a resignation to reality.” Teams are moving quickly, data is moving even faster, and the ability to operationalize data needs to meet this acceleration. Especially at large enterprise companies, teams need to utilize different tools, technologies, and metrics to meet their specific needs. The argument is that a single centralized team can’t manage all of those, at least not with the speed and accuracy the business demands. But maybe at the root of this is the fact that I’m simply not ready to resign and do away with a centralized data platform. Why? Because I’ve seen centralized data platforms work, at scale, for companies in industries ranging from B2B to Media & Entertainment to Retail. I don’t see a centralized data structure as contrary to agile, accurate data-driven decision making.
One of my reservations about data mesh is whether it simply moves the skillset silos of a central data team into knowledge silos within the domain teams. If each team owns its respective data product, it risks losing the security of having a larger team of engineers and analysts supporting the pipelines. Furthermore, many of the safeguards required to make a data mesh architecture reliable depend on the many data teams consistently adhering to the federation standards. As anyone who has ever worked with a technical team can tell you, having a system hinge on reliable, up-to-date documentation is risky business.
My favored approach is a hybrid, partially decentralized data architecture where a central data team focuses on infrastructure, enterprise data tools, and maintaining reliable core data and KPIs, while domain experts work alongside the centralized data team. I’ve heard many names for this design, but I prefer to call it a “hub and spoke” architecture. In this scenario, domain teams are more reliant on central data resources, but in exchange the central data team is able to provide pipeline and cross-functional support while product owners contribute their domain data expertise. This division allows each team to focus on what it knows best, providing a rich, stable, and consistent data platform. For example, one of my clients utilized a hub and spoke architecture to run real-time analytics and decision making on user streaming data – the exact business example that Dehghani uses in her initial paper to campaign for a data mesh architecture.
Finally, I never recommend a data mesh without a healthy dose of skepticism. One of the reasons I love working in the data industry is that the people are creative and passionate, always finding innovative ways to solve their business and data challenges. However, these new tools and frameworks should not put at risk the foundational principles that I think make for a reliable, durable, and accurate data architecture. For data mesh, the principle at stake is my belief in a Single Source of Truth (SSOT) for data. A core client need is helping their leadership determine KPIs and metrics, align teams across the organization, untangle the pandemonium of liberal data distribution, and empower teams to use their proliferated data correctly. An SSOT design isn’t a monolith, and it doesn’t advocate for “one Data Team to Rule Them All”; instead, it calls for an end to siloed data, for scaled data teams, and for the ability of teams across the business to use data to make agile decisions. Only insofar as a data mesh maintains an SSOT for data and metrics through its universal governance standards do I see it being a solid reference architecture.