We are looking for a skilled and motivated Data Platform / DevOps Engineer to operate and evolve our Global Data Platform. In this role you will work with modern distributed systems and data technologies, ensuring reliable, secure, and scalable data pipelines. You will apply Agile and DevSecOps principles to continuously improve platform stability, automation, and delivery efficiency while collaborating closely with security, engineering, and cloud operations teams.
Key Responsibilities:
Operate and maintain Global Data Platform components, including:
VM servers, Kubernetes clusters, and Kafka
Data and analytics applications such as the Apache stack, Collibra, Dataiku, and similar tools
Design and implement automation (see the provisioning sketch after this list) for:
Infrastructure provisioning
Security components
CI/CD pipelines supporting ELT/ETL data workflows
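To illustrate the kind of provisioning and security automation in scope, here is a minimal, hypothetical Python sketch using boto3; the bucket name, region, and settings are placeholders, not actual platform values.

```python
"""Hypothetical provisioning sketch: create an encrypted, non-public S3 bucket.

All names and regions are placeholders for illustration only.
"""
import boto3


def provision_encrypted_bucket(name: str, region: str = "eu-central-1") -> None:
    s3 = boto3.client("s3", region_name=region)
    # Create the bucket in the target region.
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    # Enforce server-side encryption as a baseline security control.
    s3.put_bucket_encryption(
        Bucket=name,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )
    # Block all public access to the bucket.
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )


if __name__ == "__main__":
    provision_encrypted_bucket("example-data-platform-raw")
```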
Build resiliency into data pipelines (see the health-check sketch after this list) through:
Platform health checks
Monitoring and alerting mechanisms
Proactive issue prevention and recovery strategies
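A minimal sketch of the kind of health-check and alerting logic this involves; the component endpoints and alert webhook below are hypothetical placeholders, not the platform's actual services.

```python
"""Hypothetical pipeline health-check sketch; all URLs are placeholders."""
import requests

# Placeholder component endpoints (not real platform URLs).
HEALTH_ENDPOINTS = {
    "kafka-connect": "http://kafka-connect.internal:8083/connectors",
    "ingestion-api": "http://ingestion.internal:8080/healthz",
}
ALERT_WEBHOOK = "http://alerts.internal/hook"  # placeholder


def check_component(name: str, url: str) -> bool:
    """Return True if the component answers with HTTP 200 within 5 seconds."""
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False


def run_checks() -> None:
    for name, url in HEALTH_ENDPOINTS.items():
        if not check_component(name, url):
            # Push a simple alert so on-call can react before data is delayed.
            requests.post(
                ALERT_WEBHOOK,
                json={"component": name, "status": "DOWN"},
                timeout=5,
            )


if __name__ == "__main__":
    run_checks()
```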
Apply Agile and DevSecOps practices to deliver integrated solutions in iterative increments
Collaborate and liaise with:
Enterprise Security
Digital Engineering
Cloud Operations teams
Review system issues, incidents, and alerts to:
Perform root cause analysis
Drive long-term fixes and platform improvements
Stay current with industry trends, emerging technologies, and best practices in data platforms and DevOps
Required Skills:
Minimum 5 years of experience designing and supporting large-scale distributed systems
Hands-on experience with:
Streaming and file-based ingestion (e.g. Kafka, Control-M, AWA)
DevOps and CI/CD tooling (e.g. Jenkins, Octopus; Ansible, Chef, and XL tools are a plus)
Experience with on-premises Big Data architectures; cloud migration experience is an advantage
Integration of Data Science workbenches (e.g. Dataiku or similar)
Practical experience working in Agile environments (Scrum, SAFe)
Supporting enterprise reporting and data science use cases
Strong knowledge of modern data architectures:
Data lakes, Delta Lake, data meshes, and data platforms
Experience with distributed and cloud-native technologies (see the PySpark sketch after this list):
S3, Parquet, Kafka, Kubernetes, Spark
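For context, a minimal PySpark sketch touching several of these technologies; the bucket, path, and column names are hypothetical, and credentials plus the s3a connector are assumed to be configured on the cluster.

```python
"""Minimal PySpark sketch: read Parquet from S3 and summarize it.

Bucket, path, and column names are placeholders for illustration.
"""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-on-s3-sketch").getOrCreate()

# Read a Parquet dataset from object storage (s3a connector assumed configured).
events = spark.read.parquet("s3a://example-bucket/events/")

# Simple per-day row count as a sanity check on the ingested data.
events.groupBy("event_date").count().orderBy("event_date").show()

spark.stop()
```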
Programming and scripting skills (see the templating sketch after this list):
Python (required)
Java / Scala / R
Linux scripting, Jinja, Puppet
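A small sketch of Python-driven Jinja templating as it might be used for configuration rendering; the template and values are invented for illustration only.

```python
"""Hypothetical config-rendering sketch with Jinja2; values are placeholders."""
from jinja2 import Template

# Template for a simple Kafka consumer properties file.
TEMPLATE = Template(
    "bootstrap.servers={{ brokers | join(',') }}\n"
    "group.id={{ group_id }}\n"
    "auto.offset.reset={{ offset_reset }}\n"
)


def render_consumer_config(brokers, group_id, offset_reset="earliest") -> str:
    # Render the properties file from templated values.
    return TEMPLATE.render(brokers=brokers, group_id=group_id, offset_reset=offset_reset)


if __name__ == "__main__":
    print(render_consumer_config(["broker1:9092", "broker2:9092"], "ingestion-group"))
```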
Infrastructure and platform engineering (see the scaling sketch after this list):
VM setup and administration
Kubernetes scaling and operations
Docker, Harbor
CI/CD pipelines
Firewall rules and security controls
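As an illustration of Kubernetes scaling operations, a minimal sketch using the official Python client; the deployment and namespace names are hypothetical, and kubeconfig or in-cluster access is assumed.

```python
"""Hypothetical Kubernetes scaling sketch; names and namespaces are placeholders."""
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    # Load credentials from the local kubeconfig (use load_incluster_config() inside a pod).
    config.load_kube_config()
    apps = client.AppsV1Api()
    # Patch only the replica count of the deployment's scale subresource.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    scale_deployment("ingestion-worker", "data-platform", replicas=5)
```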