Lead GTM Data Operations Analyst, AI Workflows

Klaviyo


Job Location: Boston, MA - USA

Monthly Salary: Not Disclosed
Posted on: 3 hours ago
Vacancies: 1 Vacancy

Job Summary

At Klaviyo, we value the unique backgrounds, experiences, and perspectives each Klaviyo (we call ourselves Klaviyos) brings to our workplace each and every day. We believe everyone deserves a fair shot at success and appreciate the experiences each person brings beyond the traditional job requirements. If you're a close but not exact match with the description, we hope you'll still consider applying. Want to learn more about life at Klaviyo? Visit us to see how we empower creators to own their own destiny.


Why This Role, Why Now

GTM Data Strategy & Operations was stood up from scratch, with no predecessor. Today the function runs on three offshore contractors and zero FTEs, managed by a single leader who is simultaneously building the agentic infrastructure, operating it in production, and driving major initiatives (hierarchy redesign, data quality assessment, vendor optimization).

The operating model is deliberately agentic and AI-first: a multi-agent pipeline (Cartographer, Sentinel, Resolver, Reporting) handles detection, enrichment, hierarchy mapping, and conflict resolution at scale. This is not a future-state vision: these agents are live and processing enterprise account families in production today.

The problem: one person cannot build, operate, and extend this system while also managing strategic workstreams. The function currently covers only core Tier-1 fields. Dozens of account, contact, and lead signals remain unaddressed. Every pipeline run, every failure diagnosis, and every offshore handoff flows through a single point of failure.

This role is the first onshore execution hire: an agent operator who can keep the system running, improve it, and extend detection and resolution coverage as GTM leadership prioritizes new data elements.

Role Summary

Sit between AI systems and GTM data. Operate, tune, and extend our agentic data quality pipeline (detection, enrichment, hierarchy mapping, conflict resolution) so it runs reliably, improves continuously, and expands to cover more of the data landscape. Own the handoff between automated output and human review, managing quality and throughput with our offshore team. You don't build agents from scratch, but you run them, evaluate their output with GTM data judgment, and make them better.

Core Responsibilities

Agent Pipeline Operations

  • Run and monitor production pipeline sessions (Cartographer, Sentinel, Resolver) across scheduled cadences; diagnose and resolve failures (API errors, session timeouts, data anomalies) without escalating to the function lead.
  • Execute pipeline runs in Claude Code and tmux; manage long-running batch processes; interpret logs and output to confirm data integrity before downstream handoff.
  • Maintain pipeline orchestration scripts and configuration; extend agent coverage as new data elements are prioritized by GTM leadership.

Agent Tuning & Improvement

  • Refine detection rules, prompt logic, and confidence thresholds based on output analysis and false-positive/false-negative patterns.
  • Evaluate agent accuracy by segment (Enterprise vs. MM/SMB) and recommend rule or workflow changes backed by evidence.
  • Run bake-offs (vendor vs. AI enrichment) to optimize cost, coverage, and accuracy; document results for decision-making.

Sentinel Offshore Resolution Loop

  • Own the handoff between Sentinel detection output and Concentrix triage queues; define queue structure, priority tiers, and resolution instructions.
  • Monitor offshore resolution quality and throughput; refine detection rules based on patterns surfaced through triage.
  • Close the feedback loop: track resolution outcomes back to agent configuration to reduce recurring false positives and improve detection precision.

Data Quality & Enrichment Operations

  • Maintain ops-only staging fields; manage the promote-to-production flow with audit controls.
  • Design and run AI-assisted enrichment workflows (Clay, LLM prompts) with evidence links and confidence thresholds.
  • Monitor fill rate, sampled accuracy, freshness, and cost-per-record by source and segment; surface vendor performance issues and recommend changes.
  • Keep data dictionaries, SOPs, and runbooks current as agents and processes evolve.

Cross-Functional Partnership

  • GTM Systems (SFDC): field configuration, permission sets, automation flows.
  • Data Engineering: source availability, ID mapping, lineage (no pipeline coding).
  • Reporting: define metrics and acceptance criteria; partner on dashboard requirements.

What to Expect

This is a triage environment, not a steady-state one. The function is young, the data has known gaps, and the work is to stabilize and extend, not maintain and optimize. You'll be building the plane while flying it, alongside a small team that operates with high autonomy and a bias toward measurable outcomes. If ambiguity and mess energize you, this is the right fit.

Success Metrics (6-12 Months)

Pipeline Reliability

  • Scheduled pipeline runs execute without function-lead intervention; failure-to-resolution cycle time under 24 hours for non-blocking issues.
  • Agent coverage extended to new data elements as prioritized (measured by number of signals under active detection).

Detection & Resolution Quality

  • Sentinel detection precision and recall improve quarter over quarter, tracked by segment.
  • Concentrix resolution queue throughput and accuracy meet defined acceptance thresholds.
  • False-positive rate decreases through feedback-loop refinement.

Data Quality Outcomes

  • Tier-1 field fill-rates: Country 95%; Vertical 90% at 85% sampled accuracy; Revenue bands 90%.
  • Hierarchy coverage of 65-80% across target segments.
  • Enterprise cost-per-record reduction of 30-40% via AI-first, selective vendor usage.

Qualifications

Required

  • 3-6 years in Data Ops, Sales Ops, or GTM Ops with hands-on data quality ownership for account and contact data.
  • Proficiency with Snowflake (SQL for querying, analysis, and validation) and SFDC (object model, field configuration, data flows).
  • Working experience with Claude Code or comparable LLM-based tooling in an operational (not just experimental) context.
  • Experience designing and running AI-assisted enrichment workflows (e.g., Clay, LLM prompts) and evaluating accuracy/coverage.
  • Comfort operating in a command-line environment: tmux, shell scripts, log analysis, batch-process monitoring.
  • Process-design mindset with a bias toward measurable outcomes; strong written communication.

Strong Plus

  • Experience with account/contact data vendors (D&B, ZoomInfo, Clearbit, StoreLeads) and waterfall enrichment logic.
  • Python for QA scripting, sampling, or light automation.
  • Familiarity with prompt engineering, confidence scoring, and AI guardrails (evidence capture, versioned prompts, QA sampling gates).

Tool Stack

  • Core: Snowflake (SQL), SFDC, Claude Code, Clay
  • Pipeline: Shell orchestration; Cartographer / Sentinel / Resolver agents
  • Enrichment: D&B, ZoomInfo, Clearbit, StoreLeads, LLM prompts
  • Nice to Have: Python, SOQL, prompt engineering frameworks
  • AI Guardrails (Expected Practice): confidence floors, evidence capture, versioned prompts, 10% QA sampling gates, audit-on-promote, drift alerts, and privacy/compliance checks. This role is expected to uphold and improve these practices, not just follow them.




Required Experience:

IC


About Company


Klaviyo unifies AI-powered email marketing and SMS to drive growth, retention, and measurable results. Build personalized, omnichannel experiences across WhatsApp, ecommerce, and more with K:AI Agents.
