At Klaviyo, we value the unique backgrounds, experiences, and perspectives each Klaviyo (we call ourselves Klaviyos) brings to our workplace each and every day. We believe everyone deserves a fair shot at success and appreciate the experiences each person brings beyond the traditional job requirements. If you're a close but not exact match with the description, we hope you'll still consider applying. Want to learn more about life at Klaviyo? Visit to see how we empower creators to own their own destiny.
Data is at the heart of every decision made at Klaviyo, and we're looking for a Senior Data Engineer to join our Business Intelligence (BI) team. BI at Klaviyo collaborates across all departments to provide a platform that powers all internal data, analytics, and reporting needs. Our mission is to champion data-driven value creation, and you will own creating and maintaining the internal data infrastructure that powers Klaviyo's business. This role in particular will contribute significantly to the infrastructure, pipeline, and security/compliance aspects of our internal analytics platform while driving architectural innovation and mentoring the team.
How You'll Make a Difference
As a Senior Data Engineer, you will shape the scalability, reliability, and cost-efficiency of our data platform. You'll lead architectural decisions, establish engineering best practices, and mentor other engineers while partnering closely with analytics, engineering, and business stakeholders.
Your work will directly influence data-driven decision-making across the organization by ensuring our data systems are performant, observable, and built to scale.
What You'll Do (Responsibilities)
Accelerating Engineering with AI
- Transform workflows by putting AI at the center, building smarter systems and ways of working from the ground up: for example, using AI to generate tests, detect anomalies, summarize data issues, or accelerate analysis.
Data Architecture & Optimization
- Design, develop, and maintain scalable dbt models and pipelines, including advanced incremental and merge strategies.
- Architect solutions for attribution models, event data pipelines, and analytics at scale.
- Lead performance optimization efforts across Snowflake and related data systems.
- Define and enforce best practices for query performance, warehouse management, and cost control.
Pipeline & Platform Ownership
- Own end-to-end data pipelines, ensuring reliability, scalability, and observability.
- Lead complex DAG orchestration with Airflow/MWAA.
- Oversee Spark/EMR cluster management, job optimization, and large-scale backfills.
- Implement monitoring, alerting, and automated recovery strategies for production systems.
Infrastructure & DevOps Leadership
- Architect infrastructure-as-code solutions using Terraform for Snowflake and AWS resources.
- Oversee integration of AWS services (S3, EMR, Secrets Manager, CloudWatch) into the data platform.
- Guide CI/CD pipeline design and improvements using GitHub Actions and CodeBuild.
- Promote containerization best practices with Docker for scalable deployments.
Cost & Performance Management
- Monitor Snowflake and EMR usage to proactively optimize costs.
- Analyze query performance and warehouse efficiency.
- Troubleshoot and resolve pipeline and infrastructure performance issues.
Leadership & Mentorship
- Mentor and coach junior and mid-level data engineers through code reviews and technical guidance.
- Establish and enforce coding standards, testing practices, and CI/CD processes.
- Serve as technical lead for cross-functional data initiatives.
- Advocate for reliability, performance, and cost optimization across the data engineering function.
Who You Are (Qualifications)
- 5 years of data engineering experience, including demonstrated technical leadership.
- Expert-level proficiency in dbt, including advanced modeling, testing frameworks, incremental strategies, and performance tuning.
- Deep expertise in SQL and Snowflake, including query optimization, warehouse sizing, and cost governance.
- Strong Python skills for data processing, API integrations, and internal tooling.
- Experience architecting data lakehouse solutions.
- Hands-on experience designing and operating Apache Iceberg-based data lake architectures on Amazon EMR.
- Proven experience operating production systems with a strong focus on reliability and cost efficiency.
- Demonstrated experience leveraging AI to improve personal and team workflows.
- Strong problem-solving skills and an operational mindset focused on SLAs and production stability.
- Ability to align technical decisions with business priorities.
- You've already experimented with AI in work or personal projects, and you're excited to dive in and learn fast. You're hungry to responsibly explore new AI tools and workflows, finding ways to make your work smarter and more efficient.
Nice to Haves
- Expertise in Spark/EMR performance optimization and scaling strategies.
- Advanced Terraform usage across multi-environment infrastructure.
- Extensive experience with Airflow/MWAA orchestration at scale.
- Strong Docker and container orchestration experience.
- Experience architecting AI-driven workflows, including multi-agent systems, concurrent execution models, and tool-augmented agents.
- Domain experience in:
  - Marketing attribution modeling and analytics data flows
  - Event data ingestion, transformation, and large-scale aggregation
  - Data warehouse governance, optimization, and cost modeling
  - BI infrastructure and operational excellence
We use Covey as part of our hiring and/or promotional process. For jobs or candidates in NYC, certain features may qualify it as an Automated Employment Decision Tool (AEDT). As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on April 3, 2025.
Please see the independent bias audit report covering our use of Covey here.
Required Experience:
Senior IC