We are seeking a DataOps Engineer to join our Tech Delivery and Infrastructure Operations teams, playing a key role in ensuring the reliability, automation, and performance of our analytics and data platforms. This role is primarily DataOps-focused, combining elements of DevOps and SRE to sustain and optimize data-driven environments across global business units.
You will manage end-to-end data operations, from SQL diagnostics and data pipeline reliability to automation, monitoring, and deployment of analytics workloads on cloud platforms. You'll collaborate with Data Engineering, Product, and Infrastructure teams to maintain scalable, secure, and high-performing systems.
Key Responsibilities
- Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility
- Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting
- Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance
- Collaborate with DevOps and Infrastructure teams to integrate data deployments within CI/CD pipelines (Jenkins, Azure DevOps, Git)
- Apply infrastructure-as-code principles (Terraform, Ansible) to provision and automate data environments
- Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis
- Work closely with security and compliance teams to maintain data governance and protection standards
- Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads
Required Skills & Experience
- 6 years in DataOps, Data Engineering Operations, or Analytics Platform Support, with good exposure to DevOps/SRE practices
- Proficiency in SQL and Python/Shell scripting for automation and data diagnostics
- Experience with cloud platforms (AWS mandatory; exposure to Azure/GCP a plus)
- Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and IaC frameworks (Terraform, Ansible)
- Working knowledge of monitoring tools (Datadog, Grafana, Prometheus)
- Understanding of containerization concepts (Docker, Kubernetes)
- Strong grasp of data governance, observability, and quality frameworks
- Experience in incident management and operational metrics tracking (MTTR, uptime, latency)
Qualifications:
Must-have skills: Python (strong), SQL (strong), DevOps - AWS (strong), DevOps - Azure (strong), Datadog.
Remote Work:
Yes
Employment Type:
Full-time