Azure Databricks (SQL), 5 to 9 yrs, Hyderabad, Hybrid


Job Location: Kurnool - India

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Job Summary

Role: Azure Databricks & SQL

Exp: 5 to 9 years

Location: Hyderabad

Hybrid work model

Shift: 3 PM to 12 AM

 

Responsibilities: 

Develop, maintain, and optimize ETL/ELT pipelines using Azure Databricks (PySpark/Spark SQL).

Write and optimize complex SQL queries, stored procedures, triggers, and functions in Microsoft SQL Server.

Design and build scalable, metadata-driven ingestion pipelines for both batch and streaming datasets.

Perform data integration and harmonization across multiple structured and unstructured data sources.

Implement orchestration, scheduling, exception handling, and log monitoring for robust pipeline management.

Collaborate with peers to evaluate and select the appropriate tech stack and tools.

Work closely with business consulting, data science, and application development teams to deliver analytical solutions within timelines.

Support performance tuning, troubleshooting, and debugging of Databricks jobs and SQL queries.

Work with other Azure services such as Azure Data Factory, Azure Data Lake, Synapse Analytics, Event Hub, Cosmos DB, Streaming Analytics, and Purview when required.

Support BI and Data Science teams in consuming data securely and in compliance with governance standards.
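To give a flavor of the "metadata-driven ingestion" item above, here is a minimal sketch of the pattern. All dataset names, paths, and fields are hypothetical; on Databricks the resulting plan would typically be handed to `spark.read.format(...).load(...)` (or `spark.readStream` for streaming sources), which is omitted here so the sketch runs standalone.

```python
# Hypothetical metadata describing sources: adding a dataset means adding
# a config entry, not new pipeline code.
DATASETS = [
    {"name": "orders",    "format": "delta",   "path": "/mnt/raw/orders",   "mode": "batch"},
    {"name": "clicks",    "format": "parquet", "path": "/mnt/raw/clicks",   "mode": "batch"},
    {"name": "telemetry", "format": "json",    "path": "/mnt/stream/telem", "mode": "streaming"},
]

def build_read_plan(meta):
    """Translate one metadata entry into generic reader settings."""
    plan = {
        "format": meta["format"],
        "path": meta["path"],
        "streaming": meta["mode"] == "streaming",
    }
    # Self-describing formats (Delta, Parquet) carry their own schema;
    # text formats would need inference or an explicit schema.
    if meta["format"] in ("json", "csv"):
        plan["infer_schema"] = True
    return plan

plans = {m["name"]: build_read_plan(m) for m in DATASETS}
for name, plan in plans.items():
    kind = "stream" if plan["streaming"] else "batch"
    print(f"{name}: {kind} read of {plan['format']} at {plan['path']}")
```

The same metadata table can also drive scheduling, exception handling, and log monitoring, which is what makes the approach scale across many sources.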


Qualifications:

B.E. in IT or equivalent


Additional Information:

Must have:

5 to 9 years of overall IT experience, with at least 4 years in Big Data Engineering on Microsoft Azure.

Proficiency in Microsoft SQL Server (T-SQL): stored procedures, indexing, optimization, and performance tuning.

Strong experience with Azure Data Factory (ADF), Databricks, ADLS, PySpark, and Azure SQL Database.

Working knowledge of Azure Synapse Analytics, Event Hub, Streaming Analytics, Cosmos DB, and Purview.

Proficiency in SQL, Python, and either Scala or Java, with debugging and performance optimization skills.

Hands-on experience with big data technologies such as Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch.

Strong understanding of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV.

Solid background in data modeling, data transformation, and data governance best practices.

Ability to work with large and complex datasets while upholding data quality, governance, and security standards.
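The "indexing, optimization, and performance tuning" requirement is the kind of thing interviewers probe with a query plan. The sketch below uses SQLite (Python's stdlib) purely as a stand-in for SQL Server, since the two differ in syntax and tooling (SQL Server uses execution plans and `CREATE PROCEDURE`; SQLite uses `EXPLAIN QUERY PLAN`), but the tuning principle is the same: inspect the plan, add the index the predicate needs, and confirm the plan changed.

```python
import sqlite3

# SQLite stands in for SQL Server here; table name and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("south", i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = ?"

# Without an index on `region`, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchall()

conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

# With the index, the equality predicate becomes an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("south",)).fetchall()

print("before:", plan_before[0][-1])
print("after: ", plan_after[0][-1])
```

In SQL Server the equivalent workflow is viewing the actual execution plan in SSMS and eliminating table or clustered-index scans on selective predicates.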


Remote Work:

No


Employment Type:

Full-time


Key Skills

  • Accommodation
  • Database
  • Information Technology Sales
  • Insurance Paralegal

About Company

Hiring for our leading IT client 
