REQUIREMENTS:
- Total experience: 7 years
- Extensive experience in DataOps, Data Engineering Operations, or Analytics Platform Support, with strong exposure to DevOps/SRE practices.
- Proficiency in SQL and Python/Shell scripting for automation and data diagnostics.
- Hands-on experience with cloud platforms (AWS mandatory; Azure/GCP is a plus).
- Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and Infrastructure-as-Code frameworks (Terraform, Ansible).
- Working knowledge of monitoring tools (Datadog, Grafana, Prometheus).
- Understanding of containerization concepts (Docker, Kubernetes).
- Strong grasp of data governance, observability, and quality frameworks.
- Experience in incident management and operational metrics tracking (MTTR, uptime, latency).
RESPONSIBILITIES:
- Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility.
- Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting.
- Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance.
- Collaborate with DevOps and Infrastructure teams to integrate data deployments within CI/CD pipelines (Jenkins, Azure DevOps, Git).
- Apply Infrastructure-as-Code principles (Terraform, Ansible) for provisioning and automation of data environments.
- Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis.
- Work closely with security and compliance teams to maintain data governance and protection standards.
- Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads.
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Remote Work:
Yes
Employment Type:
Full-time