Urvash
Chheda

Senior Technical PM.

Data infrastructure · ML · Analytics.

I build the infrastructure modern AI runs on — and I stay accountable to the revenue and adoption numbers on the other side.

01

3 things I'm known for

01
Data infrastructure that powers AI at scale
Designing the warehouse architecture, semantic layers, and ML pipelines that give AI products a foundation to stand on.
02
Cross-functional data product ownership
Bridging Data Engineering, Analytics Engineering, and business stakeholders — from 0-to-1 launches to enterprise-scale transformation.
03
API-first analytics that generate revenue
Designing and shipping analytics products that create new revenue streams, improve adoption, and deliver measurable ROI for the business.
02

Currently thinking about

🚧
In progress — check back soon. I'm currently updating this section with what's on my mind.
03

Philosophy

How I think

Data products are only as good as the decisions they enable.

I'm a Technical Product Manager with a CS degree and a builder's instinct, which means I don't just work alongside data and engineering teams — I speak their language. I've owned roadmaps for warehouse re-architecture, streaming pipelines, semantic layers, data governance, and ML deployment infrastructure. When I write a spec, I've already stress-tested the technical assumptions behind it.

My career spans a founding PM role at a pre-product SaaS startup, 10+ M&A integrations across $80M+ ARR at a global software company, and, currently, data platform strategy at Mavrck — operating at the intersection of Analytics & Reporting Engineering, Data Engineering, and Internal Data Analysts. Every context has been different. The rigour has been the same.

04

Working with me

🚧 In progress
05

Selected Work

01 / 03
Mavrck · Later
Centralising ML deployment across a fragmented data platform
ML Platforms Data Infra 0→1
The problem
ML models were being deployed inconsistently across three teams — no shared infrastructure, no versioning, no observability. Every team reinvented the wheel.
What I decided
Designed a centralised ML deployment layer — standardised serving infrastructure, model registry, and monitoring — owned by Data Engineering but usable by all three teams.
1 qtr
Fastest ever shipped
½ doz+
ML models live
↓
Cloud spend
Read full case study ↓
⟡
Full case study available on request
This case study contains proprietary architectural details. Request access and I'll send you the password directly.
The context

Later operates across three distinct product lines — Later Influence, Later Social, and Mavely — each powered by a growing suite of ML models covering content performance, creator analytics, affiliate insights, and more. As the platform scaled, so did the invisible cost of building AI without infrastructure discipline.

Every product team was integrating directly with AWS SageMaker endpoints independently. Each team wrote their own retrieval logic, their own transformation code, their own schema handling. It worked — until it didn't.

What I found

I didn't inherit a mandate to fix this. I identified it myself by watching how engineering teams were spending time, tracing duplicated patterns across codebases, and mapping AWS spend against actual model usage patterns. What emerged was a picture of a system accumulating structural debt quietly but rapidly across six distinct failure modes:

Decentralised model access — SageMaker endpoints accessed ad hoc by multiple services. Fragmented integrations, redundant logic, inconsistent behaviour across the platform.
Inflated infrastructure costs — models invoked independently across services with no caching, no throttling, no request consolidation. Every team paying full compute cost for the same inference result.
Tight coupling — each product team integrated directly with Data Science team outputs. Any model change required direct coordination with multiple teams, slowing every iteration.
Zero observability — no visibility into model latency, failure rates, or usage patterns. No way to make data-driven decisions about scaling, retraining, or deprecating models.
Duplicated effort — teams independently implementing identical logic to retrieve, transform, and surface derived insights. Same bugs written three times. Same compute cost paid three times.
A scalability ceiling — an architecture not designed to grow. As Later expanded its product surface and model count, cost and complexity would compound with every addition.
The hard part — selling the long-term vision

The technical problem was clear to me early. The harder challenge was making engineering leadership and company leadership feel the urgency of a problem that hadn't visibly broken anything yet.

Platform work is notoriously hard to prioritise. There's no screaming customer ticket. No dashboard going red. The costs are diffuse — spread across teams, absorbed into quarterly spend, invisible in any single place. I had to make the invisible visible.

My approach was to reframe the problem from a cost/efficiency fix into a strategic architecture decision — one with a much larger vision behind it. The Insights Service wasn't just about cleaning up SageMaker integrations. It was the first layer of what would become Later's Insights Platform: a centralised, standardised engine powering all current and future products with consistent, high-quality, ML-enriched data.

That longer-term vision — a single platform service serving creators, content, affiliates, and more across all product lines — is what got leadership aligned. Once I could show where this was going architecturally, the near-term investment became obvious.

In hindsight, I would have invested more in stakeholder alignment earlier. The vision was right, but some of the cross-team coordination that happened mid-execution could have happened before the first sprint. That's the thing I'd change.
What we built

A centralised Insights Service — a stable, versioned API layer sitting between all Later products and the underlying SageMaker model endpoints. The service owns the entire ML lifecycle end to end: input ingestion, transformation, SageMaker invocation, prediction retrieval, and storage.

Single versioned API — consuming teams no longer know or care what's happening underneath.
Full lifecycle ownership — input ingestion, transformation, endpoint invocation, prediction storage. Downstream consumers do one thing: hit the endpoint.
Consolidated invocation — caching, request deduplication, and throttling across all services.
End-to-end observability — centralised logging, latency tracking, failure rates, SLA enforcement.
Uniform governance — rate limiting, access control, schema versioning in one place.
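The consolidation pattern described above (one versioned entry point that fingerprints each request, caches results, and deduplicates concurrent identical calls before anything reaches a model endpoint) can be sketched roughly as follows. The class and method names are illustrative only, not Later's actual code:

```python
import hashlib
import json
import threading

class InsightsService:
    """Illustrative sketch of a versioned insights layer: it fingerprints
    requests, caches predictions, and deduplicates identical in-flight
    calls so the model backend is invoked once per unique request."""

    def __init__(self, invoke_model):
        # invoke_model stands in for the real backend call (e.g. a
        # SageMaker endpoint); injected so the sketch is self-contained.
        self._invoke_model = invoke_model
        self._cache = {}       # request fingerprint -> stored prediction
        self._in_flight = {}   # fingerprint -> per-request gate lock
        self._lock = threading.Lock()

    def get_insight(self, model: str, version: str, payload: dict) -> dict:
        # Fingerprint the (model, version, payload) triple so identical
        # calls from different product teams resolve to one inference.
        key = hashlib.sha256(
            json.dumps([model, version, payload], sort_keys=True).encode()
        ).hexdigest()

        with self._lock:
            if key in self._cache:          # consolidated invocation: cache hit
                return self._cache[key]
            gate = self._in_flight.setdefault(key, threading.Lock())

        with gate:                          # dedupe concurrent identical requests
            with self._lock:
                if key in self._cache:      # another caller finished first
                    return self._cache[key]
            prediction = self._invoke_model(model, version, payload)
            with self._lock:
                self._cache[key] = prediction
                self._in_flight.pop(key, None)
            return prediction
```

A real service would add TTL-based cache expiry, throttling, and the logging/latency tracking described above; this sketch only shows why two teams asking the same question pay for one inference.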
What this unlocked

The Insights Service is in production and expanding fast. More than half a dozen ML models are already available platform-wide — with more in the pipeline.

Any downstream consumer — whether it's a customer-facing analytics dashboard, an internal tool, or a product feature — does exactly one thing: hits the Insights Service endpoint and gets what it needs. No team touches model inputs. No team manages SageMaker. No team waits on another team.

The architecture that didn't exist a year ago is now the backbone every ML capability in the Later ecosystem runs on.

Role: Senior PM — when I shipped this, I owned Analytics; delivering it earned me Data Engineering ownership too
Timeline: Under 1 quarter — fastest platform service deployment in company history
Team: 1 engineering team + cross-functional stakeholders across multiple product and engineering teams
02 / 03
Aptean
10+ M&A data integrations across $80M+ ARR
M&A Warehouse Enterprise
The problem
Each acquisition brought a different data model, pipeline, and BI stack. Leadership needed consolidated reporting across all entities within weeks of close.
What I decided
Built a repeatable integration playbook — standardised data model, ingestion templates, and automated reconciliation — reducing time-to-reporting from months to weeks.
10+
Integrations shipped
$80M+
ARR consolidated
↓
Time to reporting
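To make "automated reconciliation" concrete, here is a minimal sketch of the kind of post-load check such a playbook implies: comparing per-entity ARR totals between an acquired company's source system and the consolidated warehouse. The function name, row shape, and tolerance are hypothetical, not Aptean's actual tooling:

```python
def reconcile_arr(source_rows, warehouse_rows, tolerance=0.01):
    """Flag entities whose ARR totals drift between the acquired
    company's source system and the consolidated warehouse.
    Rows are (entity, arr) pairs; tolerance absorbs rounding noise."""
    def totals(rows):
        acc = {}
        for entity, arr in rows:
            acc[entity] = acc.get(entity, 0.0) + arr
        return acc

    src, wh = totals(source_rows), totals(warehouse_rows)
    # Union of entities catches records missing on either side.
    return [
        (entity, src.get(entity, 0.0), wh.get(entity, 0.0))
        for entity in sorted(set(src) | set(wh))
        if abs(src.get(entity, 0.0) - wh.get(entity, 0.0)) > tolerance
    ]
```

A check like this would typically run as a scheduled step after each load, with mismatches routed to the integration team before consolidated reporting reaches leadership.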
03 / 03
Mavrck · Later
API-first analytics that opened a new revenue stream
API Design Analytics Revenue
The problem
Enterprise customers were exporting data manually to build their own reports. Churn risk was high and analytics adoption was flat — the product wasn't sticky enough.
What I decided
Designed an API-first analytics layer — programmable data access for enterprise customers — turning a support burden into a monetisable product surface.
New
Revenue stream
↑
Enterprise retention
↓
Support tickets
06

Experience

Feb 2023 –
Present
Senior Product Manager
Mavrck, LLC (rebranded to Later)
Leading data product strategy across three teams — Analytics & Reporting Engineering, Data Engineering, and Internal Data Analysts — owning the full lifecycle from infrastructure to insight. Driving enterprise-wide data platform transformation, ML deployment centralization, and revenue-generating analytics products.
ML Platforms Meta APIs TikTok APIs Data Governance Warehouse Architecture Streaming Semantic Layer KPI Dashboards GDPR/CCPA AWS
Jun 2021 –
Dec 2022
Product Manager, Data Analytics
Bushel, Inc.
Defined and launched a new analytics platform from zero. Led enterprise data warehouse migration to GCP, established data strategy, and drove standardized metrics across the product suite.
GCP Data Warehouse ETL 0-to-1 AgTech AWS
Feb 2020 –
Jun 2021
Associate PM, M&A Integrations
Aptean, Inc.
Managed software data integrations for 10+ acquisitions across 30+ product lines, driving $80M+ ARR. Built scalable acquisition intelligence using data models, Salesforce APIs, ETL pipelines, and analytics tooling.
M&A Integration Salesforce APIs ETL Pipelines Data Modeling Global SaaS
Jun 2016 –
Jul 2018
Founding Product Manager
MyCrop Technologies Pvt. Ltd.
First product hire at an early-stage agtech startup. Owned end-to-end product strategy from zero — roadmap definition, MVP delivery, global launch, and platform scaling.
0-to-1 Founding PM SaaS Global Launch
07

Technical Stack

01 ยท Ingestion
How data enters the system
ETL Pipelines
Streaming Infra
Kafka
Fivetran
Airbyte
REST / Webhook Ingestion
Salesforce APIs
02 ยท Warehouse
Storage & transformation
BigQuery
dbt
SQL
Snowflake
AWS Redshift
Python
JavaScript (ES6)
03 ยท ML & AI
Models & deployment
AWS SageMaker
ML Deployment
Model Registry
Feature Stores
Observability
A/B Testing
Inference APIs
04 ยท Serving
APIs & data access
REST APIs
Semantic Layer
API Gateway
GraphQL
GCP
AWS
DataDog
05 ยท BI & Analytics
Insight delivery
Amplitude
Tableau
Pendo
Sisense
Mode
Omni
PowerBI
Redash
Google Analytics
06 ยท AI Tools
Daily PM workflows
Claude
ChatGPT
Cursor
Perplexity
Gemini
GitHub Copilot
Amplitude AI
Notion AI
Dovetail
07 ยท Workflow
Delivery & collaboration
JIRA
Linear
Notion
Figma
Lucidchart
Smartsheet
08

Expertise

⬡
Data Infrastructure & Architecture
Hands-on experience driving warehouse re-architecture, semantic layers, streaming pipelines, and data governance frameworks — from strategy definition through production delivery.
◈
ML Platform Productization
Centralized ML deployments and built the infrastructure layer for platform-wide model integration — reducing cloud spend and accelerating model delivery cycles across teams.
⬙
API-First Analytics Products
Built revenue-generating analytics products on Meta, TikTok, and internal APIs — including paid performance, earned media value, reporting infrastructure, and monitoring.
⊕
0-to-1 Data Product Launches
Taken multiple data products from blank canvas to shipped MVP — defining strategy, owning discovery, leading cross-functional technical teams, and validating adoption at scale.
◫
Data Governance & Compliance
Designed and delivered governance frameworks for GDPR/CCPA compliance, access control, and standardized KPI definitions — achieving 100% audit pass rates in regulated environments.
⬘
M&A Data Integration
Managed data integrations across 10+ acquisitions at a global SaaS organization — including ETL pipelines, data custody, scalable acquisition intelligence, and cross-platform analytics alignment.
09

Education & Credentials

2019
M.S. Information Systems
University of Maryland · Robert H. Smith School of Business · USA
2016
B.Tech Computer Science
NMIMS · India
Certifications
— AI for Product Management — Pendo
— Certified Scrum Product Owner (CSPO®) — Scrum Alliance
— Digital Product Management: Modern Fundamentals — Coursera

Ideas worth
discussing.

I enjoy conversations with founders, operators, and product leaders working on hard data and infrastructure problems. If something here resonated — or you're navigating a data platform, analytics, or ML productization challenge — reach out.
