
Job Location: Bengaluru - India

Monthly Salary: Not Disclosed
Posted on: 20 hours ago
Vacancies: 1

Job Summary

Key Responsibilities:
- Develop and maintain data pipelines using Databricks and PySpark to ensure efficient data processing and transformation.
- Implement ETL processes to extract, transform, and load data from various sources into the AWS/Azure environment.
- Utilize Unity Catalog to manage and govern data assets, ensuring data quality and security.
- Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and deliver actionable insights.
- Write efficient, optimized SQL queries to extract and manipulate data from various sources.
- Monitor and troubleshoot data pipeline performance, implement improvements, and provide support for data-related issues.
- Document data engineering processes, architecture, and workflows to ensure knowledge sharing within the team.
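To illustrate the extract-transform-load pattern the responsibilities above describe, here is a minimal sketch in plain Python. It uses the standard library (csv and sqlite3) as a stand-in for a live Databricks/PySpark pipeline, and the feed, table, and column names are hypothetical, invented for this example only:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed standing in for an upstream source system.
RAW_CSV = """order_id,region,amount
1,south,120.50
2,north,75.00
3,south,bad_value
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse rows out of the raw CSV feed."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop malformed rows and normalise types."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), row["region"], float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would quarantine bad records instead
    return clean

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Load: write the cleaned rows into a warehouse-style table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return conn

conn = load(transform(extract(RAW_CSV)))
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'south'"
).fetchone()[0]
print(total)  # 120.5 — the malformed third row was dropped in transform()
```

In a Databricks job the same three stages would typically be DataFrame reads, PySpark transformations, and a write to a Unity Catalog-governed table, but the extract/transform/load separation shown here carries over directly.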

Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field.
- Proven experience as a Data Engineer or in a similar role, with a strong understanding of data engineering principles.
- Proficiency in Databricks, Unity Catalog, PySpark, Python, and SQL.
- Experience with data modeling, ETL processes, and data warehousing concepts.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Excellent communication skills to convey technical concepts to non-technical stakeholders.

What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional development and career growth.
- A collaborative and innovative work environment.
- The chance to work with cutting-edge technologies in the data engineering field.


Required Skills:

Databricks, Unity Catalog, PySpark, Python, SQL


Key Skills

  • Apache Hive
  • S3
  • Hadoop
  • Redshift
  • Spark
  • AWS
  • Apache Pig
  • NoSQL
  • Big Data
  • Data Warehouse
  • Kafka
  • Scala