Databricks Engineer

Employer Active

1 Vacancy
Job Location

Bangalore - India

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Note: Alignment on our payroll is the first priority; client hiring is the second option.

The client is seeking an experienced Databricks Engineer with strong expertise in big data platforms, cloud technologies, and data engineering practices. The ideal candidate will design, develop, and optimize data pipelines using Databricks and related ecosystems, ensuring scalable, high-performance solutions for business-critical applications.

Key Responsibilities:
  • Design and implement scalable data pipelines and ETL workflows using Databricks (PySpark, Spark SQL, Delta Lake).

  • Work with Azure/AWS/GCP Databricks environments for data ingestion, transformation, and processing.

  • Optimize big data processing for performance, scalability, and cost-efficiency.

  • Collaborate with data architects, analysts, and business stakeholders to define data requirements and implement solutions.

  • Implement CI/CD pipelines for Databricks workflows, notebooks, and jobs.

  • Manage and optimize Delta Lake tables to ensure efficient storage and retrieval.

  • Integrate data pipelines with various sources such as APIs, databases, data lakes, and streaming platforms (Kafka/Kinesis).

  • Ensure data quality, governance, and security across all data assets.

  • Troubleshoot and resolve issues in production pipelines.

  • Mentor junior engineers and contribute to best practices in data engineering.

Required Skills & Experience:
  • 7 years of professional experience in Data Engineering / Big Data.

  • Hands-on expertise in Databricks, PySpark, and Spark SQL.

  • Strong experience with Delta Lake and data lakehouse concepts.

  • Proficiency in Python/Scala/SQL for data engineering.

  • Experience with one or more cloud platforms (Azure Data Factory, AWS Glue, GCP Dataflow) integrated with Databricks.

  • Solid understanding of data modeling, warehousing (Snowflake/Redshift/BigQuery), and ETL frameworks.

  • Strong knowledge of DevOps/CI-CD, Git, and Databricks Repos.

  • Experience with data security, governance, and compliance.

  • Familiarity with streaming technologies (Kafka, Kinesis, Event Hubs) is a plus.

  • Excellent communication and problem-solving skills.

Good to Have:
  • Exposure to MLflow, Feature Store, or MLOps on Databricks.

  • Knowledge of containerization (Docker, Kubernetes).

  • Experience in leading small teams or mentoring.

Employment Type

Full-time
