Salary Not Disclosed
1 Vacancy
Data Platform & Architecture Development
Design, implement, and maintain scalable data platforms for efficient data storage, processing, and retrieval.
Build cloud-native and distributed data systems that enable self-service analytics, real-time data processing, and AI-driven decision-making.
Develop data models, schemas, and transformation pipelines that support evolving business needs while ensuring operational stability.
Apply best practices in data modeling, indexing, and partitioning to optimize query performance and cost efficiency, with attention to sustainability.
ETL Data Pipelines & Streaming Processing
Build and maintain highly efficient ETL pipelines using SQL and Python to process large-scale datasets.
Implement real-time data streaming pipelines using Kafka, Apache Beam, or equivalent technologies.
Develop reusable internal data processing tools to streamline operations and empower teams across the organization.
Write advanced SQL queries for extracting, transforming, and loading (ETL) data with a focus on efficiency.
Ensure data validation, quality monitoring, and governance using automated processes and dashboards.
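To illustrate the kind of SQL-plus-Python ETL work this role involves, here is a minimal sketch of a deduplicating transform-and-load step. The table and column names are invented for the example, and SQLite stands in for a production warehouse:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Toy ETL step: extract raw order rows, keep only the latest row per
    order_id, normalize amounts to integer cents, and load the clean table.
    All table/column names are hypothetical."""
    cur = conn.cursor()
    cur.execute(
        """
        INSERT INTO orders_clean (order_id, amount_cents)
        SELECT r.order_id, CAST(ROUND(r.amount * 100) AS INTEGER)
        FROM raw_orders AS r
        -- deduplicate: keep the most recently ingested row per order_id
        WHERE r.rowid = (SELECT MAX(rowid) FROM raw_orders WHERE order_id = r.order_id)
        """
    )
    conn.commit()
    return cur.rowcount  # number of rows loaded

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE raw_orders (order_id TEXT, amount REAL);
        CREATE TABLE orders_clean (order_id TEXT PRIMARY KEY, amount_cents INTEGER);
        INSERT INTO raw_orders VALUES ('a', 1.50), ('a', 2.00), ('b', 3.25);
        """
    )
    print(run_etl(conn))  # 2 rows loaded: the latest 'a' and 'b'
```

In a real pipeline the same pattern scales up: the dedup-and-normalize logic stays in SQL close to the data, while Python handles orchestration, validation, and monitoring.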
MLOps & Cloud-Based Data Infrastructure
Deploy machine learning pipelines with MLOps best practices to support AI and predictive analytics applications.
Optimize data pipelines for ML models, ensuring seamless integration between data engineering and machine learning workflows.
Work with cloud platforms (GCP) to manage data storage, processing, and security.
Use Terraform and Jenkins CI/CD tooling to automate data pipeline deployments and infrastructure management.
Collaboration & Agile Development
Work in Agile/DevOps teams, collaborating closely with data scientists, software engineers, and business stakeholders.
Advocate for data-driven decision-making, educating teams on best practices in data architecture and engineering.
5 years of experience as a Data Engineer working with large-scale data processing.
Strong proficiency in SQL for data transformation, optimization, and analytics.
Expertise in programming languages (Python, Java, Scala, or Go) with an understanding of functional and object-oriented programming paradigms.
Experience with distributed computing frameworks.
Proficiency in cloud-based data engineering on AWS, GCP, or Azure.
Strong knowledge of data modeling, data governance, and schema design.
Experience with CI/CD tools (Jenkins, Terraform) for infrastructure automation.
Experience with real-time data streaming (Kafka or equivalent).
Strong understanding of MLOps and integrating data engineering with ML pipelines.
Familiarity with knowledge graphs (e.g., Neo4j) and GraphQL APIs for modeling data relationships.
Background in retail customer classification and personalization systems.
Knowledge of business intelligence tools and visualization platforms.
Qualifications:
Degree required
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work:
No
Employment Type:
Contract