This article explores how quantum thinking can inform a modern data analytics strategy, enabling teams to innovate safely without disrupting existing operations. Based on insights from David Draper, Data Science Manager at Iris Software Group, it offers practical guidance on modular system design, embedding innovation in delivery cycles, and improving AI explainability. Ideal for data leaders seeking resilient, forward-looking analytics strategies.
This article examines how unreliable data pipelines can trap data teams in endless maintenance work, draining strategic capacity. It explores practical solutions for building resilient, self-healing pipelines, allowing engineers to focus on delivering insights and driving business growth.
This article explores why data migrations, despite good intentions, often fail to deliver real value, and how using the right data migration tools strategically can change the outcome. Drawing on real-world insights, it offers practical guidance for data leaders looking to break the cycle of reactive migrations and build more resilient, scalable systems.
This article explores how data teams can adopt strategic practices from the crypto industry, particularly MoonPay’s approach under Director of Data Emily Loh, to move beyond reactive tasks and implement a proactive, value-driven data strategy. It covers resource allocation frameworks, AI implementation, and system design principles to help teams operate more effectively in fast-changing environments.
In today’s competitive landscape, a robust data strategy is essential. Data teams often struggle to evolve from reactive service providers to proactive strategic partners. Crypto data teams, facing rapidly shifting markets and strict regulatory environments, provide actionable lessons for all sectors.
In this article, you’ll discover how Emily Loh, Director of Data at MoonPay, applies advanced data strategy principles to turn challenges into opportunities: structured resource allocation, purposeful AI adoption, and system design for uncertain futures.
Emily Loh leads a 15-person data team at MoonPay covering engineering, data science, and machine learning. Formerly of Coinbase, Loh brings an unconventional background in literature that enriches her team’s storytelling capabilities. “This is just storytelling,” she says. “It helps us focus on outcomes, not just outputs.”
MoonPay, the “Stripe of crypto,” processes irreversible transactions in real time while navigating volatile regulatory environments. These conditions demand an agile and forward-thinking data strategy.
At MoonPay, Loh uses a structured resource allocation model, dividing the team’s time between reactive requests, planned work, and strategic initiatives.
This method creates protected space for long-term projects and strategic initiatives, reducing the risk of reactive overload.
Whether you adopt a 25/50/25 or 30/40/30 split, the key principle remains: intentionally allocate time to support strategic data strategy goals.
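As a simple illustration of how such a split becomes protected time (a hypothetical helper, not Loh’s actual tooling), the percentages can be converted into budgeted hours at sprint planning:

```python
# Hypothetical helper: turn an allocation split into protected hours per sprint,
# so strategic work is budgeted up front rather than squeezed around reactive requests.
def allocate_sprint_hours(total_hours: float, split: dict[str, float]) -> dict[str, float]:
    """Distribute a sprint's hours according to a percentage split."""
    assert abs(sum(split.values()) - 100) < 1e-6, "split must total 100%"
    return {bucket: round(total_hours * pct / 100, 1) for bucket, pct in split.items()}

# Example: one engineer's 10-day sprint (80 hours) under a 25/50/25 split.
print(allocate_sprint_hours(80, {"reactive": 25, "planned": 50, "strategic": 25}))
# {'reactive': 20.0, 'planned': 40.0, 'strategic': 20.0}
```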
Many companies fall into the trap of implementing AI without purpose. Loh’s approach is more disciplined: AI must serve a clear business function.
“Less time on YAML files, more time on value,” says Loh. A focused AI strategy elevates your data team’s effectiveness.
Building systems for uncertain futures is core to effective data strategy. Crypto offers an extreme example, but lessons apply across AI, fintech, and e-commerce.
“We need laser focus on priorities,” says Loh. A future-ready data strategy requires both adaptability and foundational strength.
Crypto data teams thrive under pressure because they implement structured, flexible, and forward-looking data strategies. By allocating your team’s time intentionally, applying AI only where it serves a clear business function, and designing systems for uncertain futures, you can transition from a reactive support function to a strategic business partner.
Mid-level data leaders navigating operational and executive pressures will gain the most from these lessons. Whether in startups or large enterprises, these practices foster sustainable innovation.
Begin with a time audit and apply the 20/40/40 framework. Build modularity into your systems. Above all, maintain clarity on strategic priorities.
Learn More
To hear the full conversation with Emily Loh and discover additional insights, listen to the complete Data Matas podcast episode.
A comprehensive analysis of how recurring data migration projects impact productivity and divert focus from strategic priorities. The article outlines practical methods to align migration efforts with measurable business outcomes, manage risk proactively, reduce unnecessary costs, and avoid vendor-imposed cycles. It offers expert perspectives on integrating migration planning into long-term infrastructure strategy to ensure continuity, scalability, and sustained business value.
An in-depth exploration of how organisations can scale their data pipeline architecture while maintaining strict control over cloud expenditure. The article provides actionable strategies to identify cost inefficiencies, transition from always-on data syncing to smart scheduling, renegotiate vendor contracts for flexibility, and integrate AI-driven automation into pipeline operations. It presents a structured roadmap to optimise infrastructure, enhance performance, and support agile growth without compromising budget discipline.
A scalable data pipeline architecture is essential for modern analytics and AI-driven operations. However, as businesses expand their data capabilities, cloud costs can escalate rapidly—often without clear visibility. Many organisations unknowingly overspend on inefficient data pipelines, redundant processing tasks, and restrictive vendor agreements.
In the Data Matas Season 2 premiere, Aaron Phethean engages AWS expert Jon Hammant to explore how organisations can scale data pipeline architecture while maintaining cost control. Their discussion outlines practical strategies for optimising infrastructure, enhancing scalability, and avoiding vendor lock-in.
This article distils those insights to help you improve your data pipeline architecture and reduce unnecessary expenses, ensuring growth does not come at the cost of efficiency.
“AI is scaling faster than governance mechanisms,” notes Jon Hammant, AWS UK & Ireland Lead. “Without proactive cost control, businesses lose agility as infrastructure costs surge.”
Jon Hammant leads the UK & Ireland AWS Specialist Team, driving data pipeline architecture, AI, compute, and cloud infrastructure strategies. He has extensive experience in cloud optimisation, helping enterprises scale without uncontrolled cost growth.
“Cloud infrastructure is reshaping how businesses operate,” Jon explains. “Cost optimisation must be integrated into data architecture from the outset.”
Cloud infrastructure costs can grow unnoticed, especially when data pipelines are built without consideration for optimisation. Many businesses provision excess compute resources, rely on always-on synchronisation, and maintain outdated processing schedules.
“Real-time processing has become default, but that doesn’t mean it’s always necessary,” Jon warns. “Costs rise when data pipeline architecture isn’t right-sized.”
From reactive budgeting to proactive visibility.
A comprehensive audit reveals inefficiencies in existing data pipeline architecture. Many organisations underestimate the impact of idle compute resources, unused storage, and unnecessary data transfers.
Implementation Guidelines:
A regular audit can reduce cloud costs by 20–30%—savings that directly support business growth.
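As a starting point for such an audit, here is a minimal sketch, assuming AWS and boto3’s Cost Explorer client (the discussion doesn’t prescribe specific tooling), that ranks last month’s spend by service so the biggest right-sizing candidates surface first:

```python
# Minimal audit sketch, assuming AWS Cost Explorer via boto3 (the "ce" client).
# Adapt the grouping (e.g. by tag or cost category) to your own setup.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # one month of spend
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],        # break down by service
)

# Rank services by spend so idle compute, unused storage and chatty data
# transfers stand out as the first candidates for right-sizing.
groups = response["ResultsByTime"][0]["Groups"]
for group in sorted(
    groups,
    key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
    reverse=True,
):
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service:<45} ${amount:,.2f}")
```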
From default 24/7 data syncing to context-driven scheduling.
Not all data requires real-time processing. Businesses often maintain continuous synchronisation pipelines for workloads that could be run periodically. This significantly inflates infrastructure costs.
Implementation Guidelines:
This approach can reduce data pipeline architecture costs by 40–60%, without impacting business performance.
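To make the shift concrete, here is a minimal sketch assuming an Airflow-style orchestrator (an assumption; the episode doesn’t name a tool): the same sync job moved from an always-on consumer to an hourly batch window sized to how fresh the reporting actually needs to be.

```python
# Sketch of a scheduled (rather than always-on) sync, assuming Apache Airflow 2.x.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def sync_orders():
    """Placeholder incremental extract/load step; swap in your real pipeline call."""
    print("Syncing new and changed orders...")


with DAG(
    dag_id="orders_sync_hourly",          # previously: a continuously running consumer
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",          # hourly freshness is enough for most reporting
    catchup=False,
    dagrun_timeout=timedelta(minutes=30), # fail fast instead of running indefinitely
) as dag:
    PythonOperator(task_id="sync_orders", python_callable=sync_orders)
```

Lower-priority workloads can drop to daily or weekly schedules, which is where most of the saving comes from.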
From rigid contracts to adaptable cost structures.
Multi-year contracts often lock organisations into pricing models that fail to reflect evolving needs. Flexible, usage-based pricing allows businesses to adjust infrastructure spend dynamically.
Implementation Guidelines:
A flexible pricing model can reduce cloud expenditure by 15–25% and improve budgeting accuracy.
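The arithmetic behind that saving is straightforward. The sketch below uses illustrative numbers (not figures from the episode) to compare a fixed commitment sized for near-peak load against usage-based pricing when monthly volume fluctuates:

```python
# Illustrative comparison: fixed commitment vs usage-based pricing for a
# workload whose volume varies month to month (numbers are made up).
monthly_usage_units = [70, 60, 50, 45, 40, 55, 65, 80, 100, 110, 90, 75]

committed_units_per_month = 100   # rigid contract sized for near-peak demand
committed_rate = 1.00             # unit price under the commitment
on_demand_rate = 1.20             # usage-based unit price (typically higher)

committed_cost = 12 * committed_units_per_month * committed_rate
usage_based_cost = sum(units * on_demand_rate for units in monthly_usage_units)

print(f"Committed contract: {committed_cost:,.0f}")   # 1,200
print(f"Usage-based:        {usage_based_cost:,.0f}") # 1,008 (~16% lower)
# Despite the higher unit price, paying for actual usage is cheaper here because
# the commitment funds capacity that sits idle for most of the year.
```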
From AI as a cost burden to AI as a cost optimiser.
AI-powered automation can significantly reduce the manual effort required to manage data pipeline architecture. By applying AI to capacity planning and anomaly detection, businesses enhance pipeline efficiency.
Implementation Guidelines:
AI-driven automation typically results in a 30–50% reduction in data management costs while improving time-to-insight.
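As one hedged example of what that automation can look like (a hand-rolled check for illustration; production setups typically rely on managed anomaly-detection services or a proper forecasting model), a simple z-score test on daily pipeline cost flags spikes before they reach the monthly bill:

```python
# Illustrative anomaly check: flag today's pipeline cost if it deviates sharply
# from the recent baseline (z-score over a trailing window).
import statistics

daily_costs = [112, 108, 115, 110, 109, 111, 114, 113, 107, 182]  # last value spikes

baseline = daily_costs[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
latest = daily_costs[-1]
z_score = (latest - mean) / stdev

if z_score > 3:
    # In practice this would alert the on-call engineer or open a ticket,
    # rather than waiting for the spike to show up on the invoice.
    print(f"Cost anomaly: ${latest} vs baseline ${mean:.0f}/day (z = {z_score:.1f})")
```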
A structured approach is essential to optimise data pipeline architecture effectively:
Phase 1: Conduct a full audit to identify cost-saving opportunities.
Phase 2: Implement smart scheduling to reduce processing inefficiencies.
Phase 3: Review and renegotiate vendor contracts for flexible pricing.
Phase 4: Deploy AI-based tools to automate and streamline operations.
“Sequencing matters,” Jon advises. “Visibility, then optimisation, followed by automation; it’s a continuous improvement cycle.”
Scalable data pipeline architecture is essential to long-term growth. Yet, without cost optimisation, infrastructure becomes a liability rather than an asset. Begin by auditing your environment, adopt scheduling strategies, embrace flexible contracts, and invest in AI-led automation.
Cloud cost optimisation isn’t just an IT priority—it’s a business strategy that ensures sustainable, agile growth.
Resources to Get Started
Let’s be honest—ETL transformations have a bad reputation. Talk to any data leader, and they’ll tell you the same thing: it’s too disruptive, too expensive, and too risky. They worry about downtime, getting locked into another overpriced vendor contract, and the strain on internal teams. That’s exactly why at Matatika, we’ve built an approach that eliminates these risks entirely—no downtime, no wasted spend, and no surprises.
Every data team wants to scale efficiently, reduce costs, and deliver real business value. But in practice, many struggle with siloed workflows, unreliable data, and costly inefficiencies. Since recording Season 1 of the Data Matas podcast, I've reflected on the key levers the teams we featured are using to deliver value in their businesses and pulled together seven of the biggest lessons. These aren’t abstract theories—they are practical, tested strategies from professionals who have made data work for their organisations.
Data is as essential to manufacturing today as any raw material. Yet, while most manufacturers generate valuable data across their operations, fragmented and siloed systems often keep them from putting this information to effective use. Matatika’s ETL (Extract, Transform, Load) solution is designed specifically for manufacturing’s data challenges, enabling teams to unify, automate, and harness real-time insights across their operations.
SaaS ETL tool pricing is broken. Too many businesses are stuck with platforms that charge by rows, gigabytes, or arbitrary metrics, pushing costs higher without delivering real value. It’s a model that inflates SaaS data costs, forcing companies to pay more for data that doesn’t always lead to better insights.