Senior Data Analyst: Near Real Time Analytics (Contract) | Gauteng, Hybrid | ISB8501551

iSanqa Resourcing


Job Location:

Midrand - South Africa

Monthly Salary: Not Disclosed
Posted on: 7 hours ago
Vacancies: 1 Vacancy

Job Summary

Engineer near real-time analytics platforms that power global production quality monitoring!

Build scalable data pipelines with Apache Spark, Databricks, and Kafka, delivering mission-critical insights across worldwide manufacturing operations!

Drive end-to-end data engineering solutions where your expert skills in PySpark, Delta Lake, and Azure will transform raw streaming data into trusted insights that optimize production quality globally!

Expert data engineering with Apache Spark, Databricks, and Delta Lake

Hybrid and remote working flexibility with 1,960 flexible annual hours

Architectural role with global production quality impact

POSITION: Contract, 01 January 2026 to 31 December 2028

EXPERIENCE: 6-8 years of related experience

COMMENCEMENT: 01 January 2026

LOCATION: Hybrid: Midrand/Menlyn/Rosslyn/Home Office rotation

TEAM: Near Real Time Analytics (NRTA)

Near Real Time Analytics (NRTA) consumes data from various sources, especially Kafka streams, to provide near-real-time visualizations (dashboards) for end users and to trigger warnings based on simple rules or machine learning model inference. NRTA focuses on supporting all plants and production lines worldwide in the domain of production quality.
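
To give a flavour of the pattern described above, here is a minimal PySpark Structured Streaming sketch that reads quality events from Kafka and lands them in a Delta table that near-real-time dashboards could query. It assumes a Databricks runtime where `spark` is in scope; the broker address, topic, schema, and table names are hypothetical, not taken from this posting.

    from pyspark.sql import functions as F
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    # Hypothetical schema for a production-quality measurement event.
    event_schema = StructType([
        StructField("plant_id", StringType()),
        StructField("line_id", StringType()),
        StructField("metric", StringType()),
        StructField("value", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Read the raw stream from Kafka (broker and topic are hypothetical).
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "quality-events")
           .option("startingOffsets", "latest")
           .load())

    # Kafka delivers the payload as binary; parse the JSON value into columns.
    events = (raw
              .select(F.from_json(F.col("value").cast("string"),
                                  event_schema).alias("e"))
              .select("e.*"))

    # Land the parsed stream in a Delta table; rule-based warnings or model
    # inference (as the team description mentions) would read from it.
    (events.writeStream
     .format("delta")
     .option("checkpointLocation", "/chk/quality_events")  # hypothetical path
     .toTable("nrta.quality_events"))                      # hypothetical table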

Qualifications / Experience

  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field
  • 3 years of hands-on data engineering experience

Essential Skills Requirements

  • Expertise with Apache Spark (PySpark), Databricks notebooks, Delta Lake, and SQL
  • Strong programming skills in Python for data processing
  • Experience with cloud data platforms (Azure) and their Databricks offerings; familiarity with object storage (ADLS)
  • Proficient in building and maintaining ETL/ELT pipelines, data modeling, and performance optimization; a minimal batch ETL sketch follows this list
  • Knowledge of data governance, data quality, and data lineage concepts
  • Experience with CI/CD for data pipelines and orchestration tools (GitHub Actions, Asset Bundles, or Databricks Jobs)
  • Strong problem-solving skills, attention to detail, and the ability to work in a collaborative, cross-functional team
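
To make the ETL/ELT expectation above concrete, here is a minimal batch pipeline sketch in PySpark: read raw JSON from ADLS, deduplicate and clean it, and write a partitioned Delta table. It again assumes a Databricks runtime with `spark` in scope; the storage account, paths, key column, and table name are all hypothetical.

    from pyspark.sql import functions as F

    # Hypothetical ADLS Gen2 landing path (abfss = Azure Data Lake Storage).
    raw_path = "abfss://landing@examplestorage.dfs.core.windows.net/measurements/"

    raw = spark.read.json(raw_path)

    # Basic cleanup: drop duplicate events, derive a partition column, filter nulls.
    clean = (raw
             .dropDuplicates(["event_id"])                 # hypothetical business key
             .withColumn("event_date", F.to_date("event_time"))
             .where(F.col("value").isNotNull()))

    # Write a partitioned Delta table for downstream BI and analytics.
    (clean.write
     .format("delta")
     .mode("overwrite")
     .partitionBy("event_date")
     .saveAsTable("silver.measurements"))                  # hypothetical table name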

Advantageous Skills Requirements

  • Experience with streaming data (Structured Streaming, Kafka, Delta Live Tables); a Delta Live Tables sketch follows this list
  • Familiarity with materialized views, streaming tables, data catalogs, and metadata management
  • Knowledge of data visualization and BI tools (Splunk, Power BI, Grafana)
  • Experience with data security frameworks and compliance standards relevant to the industry
  • Certifications in Databricks or cloud provider platforms
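
For the streaming items above, a minimal Delta Live Tables sketch might look like the following. It is meant to run inside a Databricks DLT pipeline (where `dlt` and `spark` are provided); the source table name and the expectation rule are hypothetical.

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw quality events from a Kafka-fed bronze table.")
    def bronze_events():
        return spark.readStream.table("nrta.quality_events")  # hypothetical source

    @dlt.table(comment="Validated events for dashboards and alerting.")
    @dlt.expect_or_drop("valid_value", "value IS NOT NULL AND value >= 0")
    def silver_events():
        # The expectation above drops rows that fail the (hypothetical) rule.
        return dlt.read_stream("bronze_events").withColumn(
            "ingest_date", F.current_date())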

Role Requirements

  • Our client is seeking a hands-on Data Engineer with strong experience in building scalable data pipelines and analytics solutions on Databricks.
  • You will design, implement, and maintain end-to-end data flows, optimize performance, and collaborate with data scientists, analysts, and business stakeholders to turn raw data into trusted insights.

Key Responsibilities:

  • Design, develop, test, and maintain robust data pipelines and ETL/ELT processes on Databricks (Delta Lake, Spark SQL, Python/Scala/SQL notebooks)
  • Architect scalable data models and data vault/dimensional schemas to support reporting, BI, and advanced analytics
  • Implement data quality, lineage, and governance practices; monitor data quality metrics and resolve data issues proactively
  • Collaborate with Data Platform Engineers to optimize cluster configuration, performance tuning, and cost management in cloud environments (Azure Databricks)
  • Build and maintain data ingestion from multiple sources (RDBMS, SaaS apps, files, streaming queues) using modern data engineering patterns (CDC, event-driven pipelines, change streams, Lakeflow Declarative Pipelines); a CDC upsert sketch follows this list
  • Ensure data security and compliance (encryption, access controls) in all data pipelines
  • Develop and maintain CI/CD pipelines for data workflows; implement versioning, testing, and automated deployments
  • Partner with data scientists and analysts to provision clean data, notebooks, and reusable data products; support feature stores and model deployment pipelines where applicable
  • Optimize Spark jobs for speed and cost; implement job scheduling, monitoring, and alerting
  • Document data lineage, architecture, and operational runbooks; participate in architectural reviews and best-practice governance
  • Prepare and perform data analysis
  • Introduce data and machine learning models
  • Develop data visualizations
  • Implement data and machine learning methods
  • Process use cases to answer business-relevant questions
  • Transfer data and machine learning models to the appropriate infrastructures
  • Evaluate and continuously track technological market developments
  • Develop and update reusable technological solution blocks (building blocks) and integrate them into existing data infrastructures
  • Derive and advise on technology-specific qualification requirements
  • Use programming languages and data visualization tools to prepare data, extract knowledge from the data (data analytics), and automate decisions (AI)
  • Support the identification of data-driven use cases and incorporate them into the value-creation process
  • Collect and process adequate data (in quality and quantity) for further use in the Group data ecosystem
  • Comply with Group standards
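
As a sketch of the CDC ingestion pattern referenced in the list above, here is a minimal Delta Lake MERGE upsert in PySpark, assuming a Databricks runtime with `spark` in scope; the target table, change feed, and key column are hypothetical.

    from delta.tables import DeltaTable

    # Hypothetical target Delta table and CDC change feed.
    target = DeltaTable.forName(spark, "silver.measurements")
    changes = spark.read.table("bronze.measurement_changes")

    # Upsert: update rows that match on the business key, insert the rest.
    (target.alias("t")
     .merge(changes.alias("s"), "t.event_id = s.event_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())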

NB:

  • South African citizens/residents preferred.
  • Valid work permit holders will be considered.
  • By applying, you consent to be added to the database and to receive updates until you unsubscribe.
  • If you do not receive a response within 2 weeks, please consider your application unsuccessful.

#isanqa #DataEngineer #Expert #ApacheSpark #Databricks #DeltaLake #Azure #Kafka #RealTimeAnalytics #ITHub #NowHiring #PySpark #MachineLearning #fuelledbypassionintegrityexcellence

iSanqa is your trusted Level 2 BEE recruitment partner, dedicated to continuous improvement in delivering exceptional service. Specializing in seamless placements for permanent staff, temporary resources, and efficient contract management and billing facilitation, iSanqa Resourcing is powered by a team of professionals with an outstanding track record. With over 100 years of combined experience, we are committed to evolving our practices to ensure ongoing excellence.


Key Skills

  • Databases
  • Data Analytics
  • Microsoft Access
  • SQL
  • Power BI
  • R
  • Tableau
  • Data Management
  • Data Mining
  • SAS
  • Data Analysis Skills
  • Analytics