Senior Databricks Developer / Data Engineer
Minimum 5 Years' Experience on Cloud Platforms (AWS, Azure)
Position Overview
We are seeking a highly skilled and experienced Databricks Developer / Data Engineer to join our team. The ideal candidate will have a minimum of 5 years of hands-on experience working with Databricks on leading cloud platforms such as AWS and Azure. You will be responsible for designing, building, and optimizing scalable data pipelines, collaborating with cross-functional teams, and driving data-driven decision making across the organization.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes using Databricks on AWS and Azure.
- Optimize Spark jobs for performance, reliability, and cost-efficiency.
- Integrate Databricks workflows with cloud-native storage solutions (S3, ADLS).
- Implement data quality, validation, and governance best practices.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
- Troubleshoot and resolve issues in production data pipelines.
- Document technical solutions and create knowledge-sharing materials for the team.
Must Have Skills
- Databricks Platform Expertise: 5 years of hands-on experience developing solutions on Databricks, including workspace management, notebook development, job scheduling, and cluster configuration.
- Cloud Platform Experience: Proven experience in deploying and managing Databricks environments on AWS and Azure.
- Apache Spark: Advanced knowledge of Spark (PySpark/Scala/SQL) for data processing and analytics.
- ETL & Data Pipeline Development: Expertise in building robust ETL processes and data pipelines.
- Programming Languages: Proficiency in Python and/or Scala.
- Cloud Storage Integration: Experience integrating with S3, ADLS, or similar cloud storage services.
- SQL Skills: Strong ability to write and optimize complex SQL queries.
- Data Modeling: Solid understanding of data modeling concepts and best practices.
- Version Control: Familiarity with Git or similar version control systems.
- Problem Solving: Excellent troubleshooting and analytical skills.
Good to Have Skills
- Experience with CI/CD pipelines and DevOps practices for data engineering.
- Knowledge of Delta Lake and advanced Databricks features (e.g., MLflow, Databricks SQL).
- Experience with other cloud platforms (Google Cloud Platform etc.).
- Exposure to data visualization tools (Power BI, Tableau, etc.).
- Understanding of data security, compliance, and privacy standards.
- Hands-on experience with streaming data (Kafka, Spark Streaming).
- Familiarity with REST APIs and data integration patterns.
- Ability to mentor junior team members and conduct code reviews.
- Experience with infrastructure as code (Terraform, CloudFormation).
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum 5 years of experience in Databricks development on AWS and/or Azure.
- Strong communication skills and ability to work in a collaborative environment.
- Relevant certifications (Databricks, AWS, Azure) are a plus.
Location & Work Environment
Hybrid work options available. Occasional travel may be required for team meetings or project kick-offs.