Company: Willware Technologies
Title: Databricks Data Engineer
Experience: 4-7 Years
Location: Pune (IN-MH)
Role Summary
Hands-on engineer responsible for building, optimizing, and maintaining data pipelines in Databricks, with a strong focus on cost-efficient processing and performance optimization.
Key Responsibilities
Develop and maintain ETL/ELT pipelines using Databricks (PySpark/SQL)
Optimize workloads for cost efficiency (cluster sizing, scheduling, auto-scaling)
Implement Delta Lake best practices (partitioning, compaction, Z-ordering)
Monitor, troubleshoot, and resolve pipeline failures and performance issues
Reduce unnecessary compute and storage usage (caching, pruning, tuning)
Collaborate with senior engineers and architects on optimization initiatives
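For illustration, the Delta Lake maintenance work described above (compaction, Z-ordering, storage cleanup) is typically done with Spark SQL commands like the following sketch; the table name `events` and Z-order column `event_date` are hypothetical:

```sql
-- Compact small files and co-locate data for faster pruning (hypothetical table/column)
OPTIMIZE events ZORDER BY (event_date);

-- Remove data files no longer referenced by the table (default 7-day retention)
VACUUM events;
```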
Required Skills
Hands-on experience with Databricks and Apache Spark (PySpark/SQL)
Strong understanding of Delta Lake fundamentals
Experience in performance tuning and job optimization
Exposure to cloud platforms (AWS / Azure / GCP)
Good to Have
Experience with data orchestration tools (Airflow / ADF)
Basic knowledge of data modeling concepts