We are seeking an experienced Data Engineer to design, build, and maintain scalable, secure data pipelines and platforms that enable data-driven decision-making. The role involves working with modern cloud and big data technologies to transform raw data into high-quality analytics- and AI-ready datasets.
Requirements
Key Responsibilities
Design and maintain scalable ETL/ELT pipelines using Python, SQL, and Spark
Build modern data architectures (Data Lake / Lakehouse, Medallion)
Optimise cloud-based data platforms on AWS and/or Azure
Implement data quality, governance, and security standards
Collaborate with Data Scientists, Analysts, and Engineers to deliver reliable datasets
Support CI/CD automation and performance monitoring of data pipelines
Skills & Experience
5 years' experience in data engineering (Databricks experience required)
Strong Python and SQL skills
Advanced experience with Apache Spark (PySpark)
Workflow orchestration using Airflow, ADF, Prefect, or similar
Experience with cloud data warehouses (e.g. Snowflake)
Hands-on experience with streaming technologies (Kafka, Kinesis, or Event Hubs)
Familiarity with data quality frameworks and governance principles
Experience delivering data to BI tools (Power BI, Tableau, Looker)
Exposure to AI/ML or GenAI data use cases is advantageous
Tech Stack
Certifications (Advantageous)
Required Skills:
AWS, Azure, Databricks, Python, AI/ML
Required Education:
AWS: AWS Certified Data Engineer Associate (DEA-C01); AWS Certified Solutions Architect Associate
Azure: Microsoft Certified: Azure Data Engineer Associate (DP-203); Microsoft Certified: Azure Solutions Architect Expert
Databricks: Databricks Certified Data Engineer Professional; Databricks Certified Data Engineer Associate