Step into a high-impact Data Engineer role where you'll architect and run large-scale data pipelines across AWS and enterprise-grade data platforms. This is your chance to shape the data backbone powering analytics, operational systems, and business intelligence for a global automotive leader.
Own complex data ingestion, transformation, and provisioning streams, championing security, compliance, and data quality across enterprise-wide data assets. If you thrive in environments where data engineering meets strategy, this is your arena.
What makes this role stand out:
- Deep technical work with Python, PySpark, Terraform, AWS, and Big Data frameworks
- Flexibility through a hybrid setup and 1,960 annual flexible working hours
- Act as a technical mentor while owning key components of the enterprise data platform
You'll focus on delivering organisation-wide data products spanning DGOs and Data Assets, handling secure data ingestion for priority Enterprise D&A use cases (TOP20), and enabling reliable data provisioning for critical operational processes.
POSITION: Contract: 01 February 2026 to 31 December 2028
EXPERIENCE: 8 years' related experience
COMMENCEMENT: 01 February 2026
LOCATION: Hybrid: Midrand/Menlyn/Rosslyn/Home Office rotation
TEAM: Data Science and Engineering - Enterprise Data & Analytics
The product focuses on the creation and provisioning of enterprise-wide data spanning DGOs and Data Assets, including data protection and other compliance and security aspects. This includes data ingestion for the Enterprise D&A Use Cases (TOP20) and data provisioning for operational processes.
Qualifications/Experience
Minimum mandatory qualifications:
- Relevant IT / Business / Engineering Degree
Certifications (Preferred): Candidates with one or more of the following certifications are preferred:
- AWS Certified Cloud Practitioner
- AWS Certified SysOps Associate
- AWS Certified Developer Associate
- AWS Certified Solutions Architect - Associate
- AWS Certified Solutions Architect - Professional
- HashiCorp Certified Terraform Associate
Minimum mandatory experience:
- Above-average experience/understanding of data engineering and Big Data pipelines
- Experience working with enterprise collaboration tools such as Confluence and JIRA
- Experience developing technical documentation and artefacts
- Knowledge of data formats such as Parquet, AVRO, JSON, XML, and CSV
- Knowledge of the Agile Working Model
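To give a concrete flavour of the data-format work above, here is a minimal sketch of moving records between two of the listed formats using only the Python standard library (Parquet and AVRO would need third-party libraries such as pyarrow or fastavro; the field names are invented for illustration):

```python
import csv
import io
import json

def json_records_to_csv(json_lines: str) -> str:
    """Convert newline-delimited JSON records to CSV text."""
    records = [json.loads(line) for line in json_lines.strip().splitlines()]
    fieldnames = list(records[0].keys())  # header taken from the first record
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

sample = '{"vin": "ABC123", "plant": "Rosslyn"}\n{"vin": "DEF456", "plant": "Menlyn"}'
print(json_records_to_csv(sample))
```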
Essential Skills Requirements
Programming & Scripting:
- Python 3.x (above-average experience)
- PySpark
- PowerShell / Bash
- Boto3
Infrastructure as Code:
- Terraform (above-average experience)
Databases & Data Processing:
- SQL - Oracle/PostgreSQL (above-average experience)
- ETL (above-average experience)
- Big Data (above-average experience)
- Technical data modelling and schema design (not drag and drop)
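"Technical data modelling and schema design (not drag and drop)" means hand-written DDL of the kind sketched below, here using SQLite as a stand-in for Oracle/PostgreSQL. The fact/dimension tables and their columns are invented purely for illustration:

```python
import sqlite3

# A tiny hand-written star-schema fragment: one dimension, one fact table
# with a foreign key and a CHECK constraint. In-memory SQLite keeps the
# sketch self-contained; the same DDL style applies to Oracle/PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_vehicle (
    vehicle_id INTEGER PRIMARY KEY,
    vin        TEXT NOT NULL UNIQUE,
    model      TEXT NOT NULL
);
CREATE TABLE fact_service_event (
    event_id   INTEGER PRIMARY KEY,
    vehicle_id INTEGER NOT NULL REFERENCES dim_vehicle(vehicle_id),
    event_date TEXT NOT NULL,
    cost_zar   REAL NOT NULL CHECK (cost_zar >= 0)
);
""")
conn.execute("INSERT INTO dim_vehicle VALUES (1, 'VIN001', 'X5')")
conn.execute("INSERT INTO fact_service_event VALUES (1, 1, '2026-02-01', 1500.0)")
row = conn.execute("""
    SELECT v.model, SUM(f.cost_zar)
    FROM fact_service_event f JOIN dim_vehicle v USING (vehicle_id)
    GROUP BY v.model
""").fetchone()
print(row)
```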
AWS Cloud Services:
- Group Cloud Data Hub (CDH)
- Group CDEC Blueprint
- AWS Glue
- CloudWatch
- SNS (Simple Notification Service)
- Athena
- S3
- Kinesis (Kinesis Data Streams, Kinesis Data Firehose)
- Lambda
- DynamoDB
- Step Functions
- Parameter Store
- Secrets Manager
- CodeBuild / CodePipeline
- CloudFormation
- AWS EMR
- Redshift
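As a hedged sketch of the Lambda + S3 pattern in the list above: a handler that pulls bucket/key pairs out of an S3 "ObjectCreated" notification event. The event shape follows AWS's documented S3 notification format, but the bucket and key names are invented; no AWS SDK is needed to parse it, so the function is unit-testable offline:

```python
def handler(event: dict, context=None) -> list:
    """Return (bucket, key) for each S3 record in the triggering event."""
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        pairs.append((s3["bucket"]["name"], s3["object"]["key"]))
    return pairs

# Minimal S3 put-notification event with one record (names are illustrative).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-zone"},
                "object": {"key": "ingest/part-0.parquet"}}}
    ]
}
print(handler(sample_event))
```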
Analytics:
- Business Intelligence (BI) Experience
Soft Skills:
- Self-driven team player with ability to work independently and multi-task
- Strong written and verbal communication skills with precise documentation
- Strong organizational skills
- Ability to work collaboratively in a team environment
- Problem-solving capabilities
- Above-board work ethics
Advantageous Skills Requirements:
- Demonstrated expertise in data modelling and Oracle SQL
- Exceptional analytical skills for analysing large and complex data sets
- Perform thorough testing and data validation to ensure the accuracy of data transformations
- Experience building data pipelines using AWS Glue, AWS Data Pipeline, or similar platforms
- Familiarity with data stores such as AWS S3, AWS RDS, or DynamoDB
- Experience and solid understanding of various software design patterns
- Experience preparing specifications from which programs will be written, designed, coded, tested, and debugged
- Experience working with Data Quality Tools such as Great Expectations
- Experience developing and working with REST APIs
- Basic experience in Networking and troubleshooting network issues
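The data-quality point above (tools such as Great Expectations) boils down to declarative, per-column rules evaluated against a batch. The sketch below mimics that idea in plain Python; it is NOT the Great Expectations API, and all field names and thresholds are invented:

```python
def expect_no_nulls(rows, column):
    """Fail if any row has a missing value in the given column."""
    failures = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"expectation": f"no_nulls:{column}",
            "success": not failures, "failed_rows": failures}

def expect_between(rows, column, low, high):
    """Fail if any value falls outside the [low, high] range."""
    failures = [i for i, r in enumerate(rows) if not (low <= r[column] <= high)]
    return {"expectation": f"between:{column}",
            "success": not failures, "failed_rows": failures}

rows = [
    {"vin": "VIN001", "mileage_km": 12000},
    {"vin": None,     "mileage_km": 9000000},  # fails both checks
]
results = [expect_no_nulls(rows, "vin"),
           expect_between(rows, "mileage_km", 0, 500000)]
print(results)
```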
Role Requirements
Data Pipeline Development:
- Build and maintain Big Data Pipelines using Group Data Platforms
- Design, develop, and optimize ETL processes for large-scale data ingestion and transformation
- Implement data pipelines using AWS Glue, Lambda, Step Functions, and other AWS services
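Stripped of the AWS services, the ingestion/transformation work above reduces to three composable stages; in production these would map to Glue jobs or Lambda steps orchestrated by Step Functions. A minimal sketch with invented record fields:

```python
def extract(raw_lines):
    """Parse raw pipe-delimited lines into records."""
    return [dict(zip(("vin", "plant", "units"), line.split("|")))
            for line in raw_lines]

def transform(records):
    """Cast types and drop malformed rows."""
    clean = []
    for r in records:
        try:
            clean.append({**r, "units": int(r["units"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine these for review
    return clean

def load(records, sink):
    """Append records to the target store (here, just a list)."""
    sink.extend(records)
    return len(records)

sink = []
loaded = load(transform(extract(["VIN001|Rosslyn|12", "VIN002|Menlyn|oops"])), sink)
print(loaded, sink)
```

The malformed second row ("oops" is not an integer) is silently dropped by `transform`, so only one record reaches the sink.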
Data Governance & Security:
- Act as custodian of data and ensure that data is shared in line with information classification requirements on a need-to-know basis
- Ensure data protection and compliance with security requirements
- Implement data quality checks and validation processes
Technical Leadership:
- Mentor, train, and upskill members of the team
- Provide technical guidance on data architecture and best practices
- Review and approve technical designs and implementations
Innovation & Improvement:
- Stay up to date with the latest data engineering tools, technologies, and industry trends
- Identify opportunities for process improvements and automation to enhance the efficiency and reliability of data pipelines
- Explore and evaluate new data engineering approaches and technologies to drive innovation within the organisation
Data Modelling & Architecture:
- Design and implement technical data models and schema designs
- Ensure scalability and performance of data solutions
- Create and maintain data architecture documentation
Collaboration:
- Work with cross-functional teams to gather requirements and deliver data solutions
- Collaborate with stakeholders to understand data needs and use cases
- Support Enterprise D&A Use Cases (TOP20) and operational processes
Testing & Quality Assurance:
- Perform thorough testing and data validation to ensure accuracy of data transformations
- Implement automated testing frameworks for data pipelines
- Monitor data pipeline performance and troubleshoot issues
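To give a flavour of the automated-testing point above: a pipeline transformation verified with plain assertions (in practice this would live in a pytest suite; the dedupe function and its fields are an invented example):

```python
def dedupe_latest(records, key="vin", ts="updated_at"):
    """Keep only the most recent record per key -- a common pipeline step."""
    latest = {}
    for r in records:
        if r[key] not in latest or r[ts] > latest[r[key]][ts]:
            latest[r[key]] = r
    return sorted(latest.values(), key=lambda r: r[key])

# Assertion-style tests of the transformation's behaviour.
data = [
    {"vin": "A", "updated_at": "2026-01-01"},
    {"vin": "A", "updated_at": "2026-03-01"},
    {"vin": "B", "updated_at": "2026-02-01"},
]
result = dedupe_latest(data)
assert [r["vin"] for r in result] == ["A", "B"]
assert result[0]["updated_at"] == "2026-03-01"
print("all checks passed")
```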
Documentation:
- Develop technical documentation and artefacts
- Create and maintain runbooks and operational procedures
- Document data lineage and metadata
NB:
South African citizens / residents are preferred. Applicants with valid work permits will also be considered. By applying, you consent to being added to the database and to receiving updates until you unsubscribe. If you do not receive a response within 2 weeks, please consider your application unsuccessful.
#isanqa #DataEngineer #Expert #AWS #Python #PySpark #BigData #Terraform #DataPipelines #CloudEngineering #ITHub #NowHiring #fuelledbypassionintegrityexcellence
iSanqa is your trusted Level 2 BEE recruitment partner, dedicated to continuous improvement in delivering exceptional service. Specializing in seamless placements for permanent staff, temporary resources, and efficient contract management and billing facilitation, iSanqa Resourcing is powered by a team of professionals with an outstanding track record. With over 100 years of combined experience, we are committed to evolving our practices to ensure ongoing excellence.