Role Summary
We are seeking an experienced Databricks Engineer with strong expertise in data engineering, big data processing, and cloud platforms. The ideal candidate will have hands-on experience with Databricks and PySpark, and with building scalable data pipelines in Azure or AWS environments.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines using Azure Databricks / AWS Databricks
- Develop data transformation logic using PySpark / Spark SQL
- Build and maintain ETL/ELT workflows for structured and unstructured data
- Work with data lakes and cloud storage (Azure Data Lake / AWS S3)
- Optimize Spark jobs for performance and scalability
- Implement data quality checks and validation processes
- Collaborate with Data Architects, BI teams, and business stakeholders
- Support CI/CD pipelines and DevOps practices for data deployments
- Troubleshoot production data issues and provide long-term solutions
Required Skills
- 7 years of Data Engineering experience
- 3 years of hands-on experience with Databricks
- Strong experience with PySpark and Spark SQL
- Experience with Azure Data Factory or AWS Glue
- Experience working with Delta Lake
- Strong SQL skills
- Experience with cloud platforms (Azure or AWS)
- Knowledge of data warehousing concepts
- Experience performance-tuning Spark jobs
- Familiarity with Git, Jenkins, and Azure DevOps
Nice to Have
- Databricks Certification
- Experience with Snowflake
- Knowledge of streaming (Kafka / Spark Streaming)
- Experience with Terraform or Infrastructure as Code
- Banking/Finance domain experience (preferred for NY/NJ roles)