Senior Technical PM.
Data infrastructure · ML · Analytics.
I build the infrastructure modern AI runs on, and I stay accountable to the revenue and adoption numbers on the other side.
I'm a Technical Product Manager with a CS degree and a builder's instinct, which means I don't just work alongside data and engineering teams; I speak their language. I've owned roadmaps for warehouse re-architecture, streaming pipelines, semantic layers, data governance, and ML deployment infrastructure. When I write a spec, I've already stress-tested the technical assumptions behind it.
My career spans founding PM at a pre-product SaaS startup, 10+ M&A integrations across $80M+ ARR at a global software company, and currently leading data platform strategy at Mavrck, operating at the intersection of Analytics & Reporting Engineering, Data Engineering, and Internal Data Analysts. Every context has been different. The rigour has been the same.
Later operates across three distinct product lines (Later Influence, Later Social, and Mavely), each powered by a growing suite of ML models covering content performance, creator analytics, affiliate insights, and more. As the platform scaled, so did the invisible cost of building AI without infrastructure discipline.
Every product team was integrating directly with AWS SageMaker endpoints independently. Each team wrote its own retrieval logic, its own transformation code, its own schema handling. It worked, until it didn't.
I didn't inherit a mandate to fix this. I identified it myself by watching how engineering teams were spending time, tracing duplicated patterns across codebases, and mapping AWS spend against actual model usage. What emerged was a picture of a system quietly but rapidly accumulating structural debt across six distinct failure modes.
The technical problem was clear to me early. The harder challenge was making engineering leadership and company leadership feel the urgency of a problem that hadn't visibly broken anything yet.
Platform work is notoriously hard to prioritise. There's no screaming customer ticket. No dashboard going red. The costs are diffuse: spread across teams, absorbed into quarterly spend, invisible in any single place. I had to make the invisible visible.
My approach was to reframe the problem from a cost and efficiency fix into a strategic architecture decision, one with a much larger vision behind it. The Insights Service wasn't just about cleaning up SageMaker integrations. It was the first layer of what would become Later's Insights Platform: a centralised, standardised engine powering all current and future products with consistent, high-quality, ML-enriched data.
That longer-term vision, a single platform service serving creators, content, affiliates, and more across all product lines, is what got leadership aligned. Once I could show where this was going architecturally, the near-term investment became obvious.
A centralised Insights Service: a stable, versioned API layer sitting between all Later products and the underlying SageMaker model endpoints. The service owns the entire ML lifecycle end to end: input ingestion, transformation, SageMaker invocation, prediction retrieval, and storage.
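To make the shape of that lifecycle concrete, here is a minimal Python sketch of what a versioned service layer like this might look like. Every name here (`InsightsService`, `invoke_model`, the model identifiers) is illustrative rather than the real implementation, and the SageMaker call is stubbed out so the sketch stays self-contained:

```python
# Hypothetical sketch of the Insights Service's end-to-end flow.
# The real service invokes AWS SageMaker endpoints; that call is
# replaced here with a stub so the example runs standalone.

def invoke_model(endpoint_name: str, payload: dict) -> dict:
    """Stand-in for a SageMaker endpoint invocation."""
    # A real implementation would serialize `payload` and call the
    # sagemaker-runtime API; here we return a canned prediction.
    return {"score": 0.87, "model": endpoint_name}

class InsightsService:
    """Owns the ML lifecycle: ingest -> transform -> invoke -> store."""

    API_VERSION = "v1"

    def __init__(self):
        self._store: dict[str, dict] = {}  # stand-in for a real datastore

    def _transform(self, raw: dict) -> dict:
        # Centralised transformation and schema handling: one place,
        # instead of every product team writing its own copy.
        return {"features": [raw.get("impressions", 0), raw.get("likes", 0)]}

    def get_insight(self, entity_id: str, raw: dict, model: str) -> dict:
        payload = self._transform(raw)             # ingestion + transformation
        prediction = invoke_model(model, payload)  # SageMaker invocation
        record = {"version": self.API_VERSION, "entity": entity_id, **prediction}
        self._store[entity_id] = record            # prediction storage
        return record

service = InsightsService()
result = service.get_insight(
    "creator-42",
    {"impressions": 1200, "likes": 300},
    model="content-performance",
)
```

The point of the sketch is the boundary: transformation, invocation, and storage all live behind one versioned interface, so no product team ever touches a model endpoint directly.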
The Insights Service is in production and expanding fast. More than half a dozen ML models are already available platform-wide, with more in the pipeline.
Any downstream consumer, whether it's a customer-facing analytics dashboard, an internal tool, or a product feature, does exactly one thing: it hits the Insights Service endpoint and gets what it needs. No team touches model inputs. No team manages SageMaker. No team waits on another team.
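From the consumer side, that "exactly one thing" can be sketched in a few lines of Python. The URL, path, and response shape below are hypothetical, and the HTTP transport is injectable so the sketch can be exercised without a live service:

```python
import json
from urllib import request

# Hypothetical consumer-side sketch: a downstream feature calls one
# Insights Service endpoint and nothing else. The base URL and the
# response fields are illustrative, not the real API.

BASE_URL = "https://insights.example.internal/v1"

def fetch_insight(entity_id: str, http_get=None) -> dict:
    """Fetch ML-enriched data for one entity from the Insights Service.

    `http_get` is injectable so the sketch runs without a network;
    by default it would go over HTTP via urllib.
    """
    url = f"{BASE_URL}/insights/{entity_id}"
    if http_get is None:
        http_get = lambda u: request.urlopen(u).read()  # real network path
    return json.loads(http_get(url))

# Exercising the sketch with a fake transport standing in for the service:
fake = lambda url: json.dumps(
    {"entity": url.rsplit("/", 1)[-1], "score": 0.91}
).encode()
insight = fetch_insight("creator-42", http_get=fake)
```

The consumer knows one URL and one response schema; model inputs, SageMaker, and storage are all someone else's problem by design.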
The architecture that didn't exist a year ago is now the backbone that every ML capability in the Later ecosystem runs on.
I enjoy conversations with founders, operators, and product leaders working on hard data and infrastructure problems. If something here resonated, or if you're navigating a data platform, analytics, or ML productization challenge, reach out.