Client is seeking an experienced Databricks Engineer with strong expertise in big data platforms, cloud technologies, and data engineering practices. The ideal candidate will design, develop, and optimize data pipelines using Databricks and related ecosystems, ensuring scalable, high-performance solutions for business-critical applications.
Key Responsibilities:
Design and implement scalable data pipelines and ETL workflows using Databricks (PySpark, SparkSQL, Delta Lake); a brief illustrative sketch follows this list.
Work with Azure/AWS/GCP Databricks environments for data ingestion, transformation, and processing.
Optimize big data processing for performance, scalability, and cost-efficiency.
Collaborate with data architects, analysts, and business stakeholders to define data requirements and implement solutions.
Implement CI/CD pipelines for Databricks workflows, notebooks, and jobs.
Manage and optimize Delta Lake tables and ensure efficient storage and retrieval.
Integrate data pipelines with various sources such as APIs, databases, data lakes, and streaming platforms (Kafka/Kinesis).
Ensure data quality, governance, and security across all data assets.
Troubleshoot and resolve issues in production pipelines.
Mentor junior engineers and contribute to best practices in data engineering.
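For illustration only, a minimal sketch of the kind of batch ETL step described above, written in PySpark against a Delta Lake table; the source path, column names, and table name are hypothetical examples, not details of the client's systems:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustrative sketch of a Databricks-style batch ETL step.
# On Databricks a SparkSession is already provided as `spark`; the builder
# call is included only so the snippet is self-contained.
spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Ingest: read raw JSON from a hypothetical landing path.
raw = spark.read.json("/mnt/raw/orders/")

# Transform: basic cleansing and derivation of a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write to a Delta Lake table for efficient storage and retrieval.
(cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("order_date")
        .saveAsTable("analytics.orders_clean"))
```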
Requirements:
7 years of professional experience in Data Engineering / Big Data.
Hands-on expertise in Databricks, PySpark, and SparkSQL.
Strong experience with Delta Lake and data lakehouse concepts.
Proficiency in Python/Scala/SQL for data engineering.
Experience with one or more cloud platforms (Azure Data Factory, AWS Glue, GCP Dataflow) integrated with Databricks.
Solid understanding of data modeling, warehousing (Snowflake/Redshift/BigQuery), and ETL frameworks.
Strong knowledge of DevOps/CI-CD, Git, and Databricks Repos.
Experience with data security, governance, and compliance.
Familiarity with streaming technologies (Kafka, Kinesis, Event Hubs) is a plus.
Excellent communication and problem-solving skills.
Nice to Have:
Exposure to MLflow, Feature Store, or MLOps on Databricks (a brief MLflow sketch follows this list).
Knowledge of containerization (Docker, Kubernetes).
Experience in leading small teams or mentoring.
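As an illustration of the MLflow exposure mentioned above, a minimal experiment-tracking sketch; the run name, parameter, and metric values are hypothetical examples:

```python
import mlflow

# Minimal illustrative sketch of MLflow experiment tracking on Databricks.
with mlflow.start_run(run_name="example_model_run"):
    mlflow.log_param("max_depth", 5)   # example hyperparameter
    mlflow.log_metric("auc", 0.91)     # example evaluation metric
```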
Full-time