Data Engineer
We are seeking an Azure Databricks Engineer with expertise in Apache Kafka to design and implement real-time and batch data processing solutions on Azure Cloud. The ideal candidate will have 8 years of experience in big data engineering, streaming pipelines, and cloud-based data warehousing to support enterprise-scale analytics.
Key Responsibilities:
Develop and optimize big data pipelines using Azure Databricks (Spark, Scala, PySpark)
Design real-time streaming solutions with Confluent/Apache Kafka and Kafka Streams
Build and manage ETL/ELT workflows using ADF, Delta Lake, and Databricks
Ensure performance, cost-efficiency, and data security best practices
Implement CI/CD pipelines for data engineering workflows using Azure DevOps and Terraform
Required Qualifications:
Expertise in Azure Databricks, Apache Spark, and PySpark/Scala
Hands-on experience with Apache Kafka (Kafka Streams, Confluent Kafka, Kafka Connect)
Strong knowledge of Delta Lake and Medallion Architecture
Proficiency in Azure Data Factory (ADF), Azure Data Lake, and Azure Synapse
Experience with CI/CD, Terraform, and data security best practices
Mandatory Skills:
Pega Platform
Pega CDH
Secondary Skills:
Agile
Adobe Experience Platform
Adobe Target
Full-Time