Our client is seeking an Advanced Data Engineer to join the Cloud Databases & Services team.
This product group is focused on establishing central database management by implementing fully managed solutions and reference architectures across both on-premise and public cloud environments.
You will be instrumental in building standardized templates and offering expert database services across a variety of cloud technologies.
If you are a cloud-data specialist who excels at standardization and building scalable database building blocks, this is your next challenge.
4-6 Years' Experience in Data Engineering
Hands-on Mastery of Python, PySpark, and AWS/OCI
Cloud Database Management & Reference Architectures
Position Details:
- Contract Start Date:
- Contract End Date:
- Location: Midrand/Menlyn/Rosslyn/Home Office Rotation
The Mission: You will be responsible for developing and implementing fully managed database solutions and standardized templates for global use.
Your role involves collaborating with cross-functional teams to design reference architectures that bridge the gap between on-premise systems and the public cloud.
You will act as a database expert providing specialized services and ensuring the sustainability of cloud data technologies.
Qualifications & Experience:
- Education: Relevant IT, Computer Science, or Engineering degree (advanced degrees are advantageous).
- Experience: Minimum 3-5 years of demonstrated experience as a Data Engineer.
- Technical Core: Hands-on expertise in Python and PySpark is essential.
- Certifications: AWS Certified Cloud Practitioner, Oracle Cloud certifications, or relevant data engineering certifications are preferred.
Essential Skills (Verified):
- Strong experience with Python (Python 3.x) and PySpark for developing data processing jobs (a brief illustrative sketch follows this list).
- At least 3 years' experience with AWS services commonly used by data engineers, such as Athena, Glue, Lambda, S3, and ECS.
- Hands-on experience with NoSQL databases such as DynamoDB and relational databases (Oracle/PostgreSQL), including strong Oracle SQL skills.
- Experience with Oracle Cloud Infrastructure (OCI) services and tooling for databases, storage, and data processing.
- Expertise in data formats and schema design, including Parquet, Avro, JSON, XML, and CSV, and technical data modelling (not drag-and-drop).
- ETL and data pipeline development experience, including building pipelines with AWS Glue or similar platforms.
- Experience with containerization and orchestration technologies such as Docker (Kubernetes/OpenShift advantageous).
- Proficiency with scripting for automation (Bash, PowerShell) and familiarity with Linux/Unix environments.
- Experience with data-quality tooling (e.g. Great Expectations) and with performing thorough data testing and validation.
- Familiarity with cloud infrastructure-as-code and DevOps tools such as Terraform, CloudFormation, CI/CD pipelines, Git, and Jenkins.
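Purely as an illustration of the Python/PySpark skills above, here is a minimal sketch of a data processing job: raw CSV landed on S3 is de-duplicated, validated, and republished as date-partitioned Parquet for query engines such as Athena. The bucket paths and column names (order_id, order_ts) are hypothetical placeholders, not details of this role.

```python
# Minimal PySpark sketch: read raw CSV, apply basic quality gates,
# and write partitioned Parquet. All names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw CSV from a (hypothetical) landing bucket.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-landing-bucket/orders/")
)

# Basic data-quality gates: drop duplicates, reject rows missing the key.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write analytics-ready Parquet, partitioned by date for Athena/Glue queries.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

Partitioning by date is a common choice here because Athena and Glue can prune partitions at query time, cutting scan costs.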
Advantageous Skills:
- Knowledge of Kafka or other streaming technologies and AWS Kinesis for real-time data ingestion (a short Kinesis sketch follows this list).
- Experience with AWS Redshift, EMR, and other analytics/warehouse technologies.
- Familiarity with GROUP Cloud Data Hub (CDH) or similar organizational cloud data blueprints.
- Java / JEE experience and understanding of Java application servers.
- Experience with monitoring and observability tools such as CloudWatch and Grafana.
- AWS solution architecture experience and certifications (e.g. AWS Certified Cloud Practitioner) are advantageous.
- Familiarity with REST APIs and building integrations with external systems.
- Experience with schema design for BI and data warehousing and preparing specifications for development.
- Experience with MongoDB or other NoSQL stores.
- Familiarity with Agile/Scrum delivery models and working within cross-functional teams.
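For the streaming item above, a minimal boto3 sketch of real-time ingestion into Kinesis; the stream name, region, and payload are invented for the example.

```python
# Minimal sketch: push one JSON event into a (hypothetical) Kinesis stream.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

record = {"order_id": "A-1001", "amount": 249.90}

# PartitionKey determines shard routing; keying on order_id keeps
# all events for one order on the same shard, preserving their order.
kinesis.put_record(
    StreamName="example-orders-stream",  # hypothetical stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["order_id"],
)
```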
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL workflows to ingest and transform data for analytics and reporting (an illustrative Glue job skeleton follows this list).
- Implement and optimize data storage solutions including data lakes and data warehouses on cloud platforms.
- Develop PySpark and Python applications for large-scale data processing and transformations.
- Ensure data quality, consistency, and integrity through testing, validation, and the use of data-quality tools.
- Collaborate with stakeholders to translate business requirements into technical specifications and data models.
- Propose and review system and solution designs and evaluate technical alternatives.
- Maintain and operate cloud infrastructure and CI/CD pipelines for data platform components.
- Create and maintain technical documentation, runbooks, and artefacts for developed solutions.
- Support production troubleshooting, monitoring, and incident management for data services.
- Work closely with BI teams to prepare and optimize data for reporting tools such as Business Objects or Tableau.
- Coach and support fellow engineers and help improve team capability through knowledge sharing and training.
- Participate in Agile ceremonies and contribute to continuous improvement of delivery processes.
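To make the pipeline responsibilities concrete, here is a skeleton of the kind of AWS Glue PySpark job described above; the database, table, and bucket names are hypothetical placeholders, and a real job would follow the team's standardized templates.

```python
# Skeleton AWS Glue PySpark job: catalog read, light cleanup, Parquet write.
# The awsglue modules are only available inside the Glue runtime.
# Database, table, and bucket names are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop obviously bad rows before publishing downstream.
orders = source.toDF().filter("order_id IS NOT NULL").dropDuplicates(["order_id"])

# Write analytics-ready Parquet for Athena/Redshift Spectrum consumers.
orders.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```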
Important Application Details
Location & Relocation: Applicants based outside of Gauteng must be willing to relocate. Please note that relocation to the province will be at the candidate's own cost.
Eligibility & Legal
- Citizenship: South African citizens and residents are preferred.
- Work Permits: Candidates with valid work permits will be considered.
- Privacy: By applying, you consent to being added to our database and receiving updates until you unsubscribe.
Application Status: If you do not receive a response within 2 weeks, please consider your application unsuccessful.
#isanqa #isanqaresourcing #fuelledbypassionintegrityexcellence #DataEngineer #CloudDatabases #Python #PySpark #AWS #OracleCloud #CloudArchitecture #GautengJobs #TechCareersSA
iSanqa is your trusted Level 2 BEE recruitment partner, dedicated to continuous improvement in delivering exceptional service. Specializing in seamless placements for permanent staff, temporary resources, and efficient contract management and billing facilitation, iSanqa Resourcing is powered by a team of professionals with an outstanding track record. With over 100 years of combined experience, we are committed to evolving our practices to ensure ongoing excellence.