Hi
10 years of experience in the IT industry required.
Job Title: Sr. Databricks Engineer
Location: Remote
Duration: 12 Months Contract
We have the below long-term job opening.
If you are interested, please send your updated resume with the details below.
Your current location:
Visa status:
Availability:
Expected rate (all-inclusive, C2C/1099):
Job Summary:
We are seeking a highly skilled Sr. Databricks Engineer to design, develop, and optimize scalable big data and analytics solutions. The ideal candidate will have extensive experience with Databricks, Spark, cloud-based data platforms, and modern ETL/ELT frameworks. This role requires strong expertise in building high-performing data pipelines, supporting enterprise analytics, and collaborating with cross-functional teams in a remote environment.
Must-Have Technical/Functional Skills
10 years of overall experience in data engineering or related fields
4 years of hands-on experience with Databricks and Apache Spark
Strong proficiency in PySpark, SQL, and performance tuning
Experience with ETL/ELT pipeline development and orchestration
Expertise in Delta Lake, data modeling, and optimization
Strong experience with cloud platforms such as Azure, AWS, or GCP
Familiarity with Python or Scala for data engineering tasks
Experience with CI/CD pipelines and DevOps practices
Strong analytical, problem-solving, and communication skills
Roles & Responsibilities
Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Spark SQL
Build and optimize Delta Lake architectures and data workflows
Develop reusable frameworks for ingestion, transformation, and validation
Collaborate with data architects, analysts, and business stakeholders to deliver data solutions
Optimize Spark jobs and SQL queries for performance and scalability
Implement data quality monitoring, logging, and alerting mechanisms
Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows
Work with cloud-based storage and compute services
Support production deployments, troubleshooting, and incident resolution
Ensure security, governance, and compliance standards are followed
Required Skills & Experience
Strong hands-on experience with Databricks, Apache Spark, and PySpark
Experience with Azure Data Factory, AWS Glue, or similar orchestration tools
Expertise in SQL, query optimization, and large-scale data processing
Experience with data lakes, data warehouses, and modern analytics platforms
Knowledge of Git, Jenkins, Terraform, or similar DevOps tools
Familiarity with Agile and Scrum methodologies
Ability to work independently in a remote setup
Nice to Have
Databricks Certification
Experience with Power BI, Tableau, or other visualization tools
Knowledge of streaming technologies such as Kafka or Spark Streaming
Exposure to machine learning workflows and MLOps
Experience in the insurance, banking, healthcare, or retail domains
U.S. citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor visas at this time.
Thanks & Regards
Girish Kumar
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work: Yes
Employment Type: Contract