Role: Azure Data Engineer
Location: Phoenix, AZ (local candidates only)
Long-Term Contract
Overview:
The client is seeking a highly skilled Data Engineer with extensive experience in Azure Databricks and data engineering. This person will be responsible for designing, developing, and maintaining data pipelines, ensuring data quality, and optimizing code for performance. They should have strong experience with Azure-based services, particularly Data Lake Storage and SQL Data Warehouse, and be able to handle complex data engineering tasks. The ideal candidate should be able to work independently, communicate effectively with stakeholders, and deliver high-quality results within set timelines. Additionally, the role requires expertise in data governance, data modeling, and troubleshooting technical issues, along with strong attention to detail and a problem-solving mindset.
Key Responsibilities:
- Business Requirements Interpretation: Interpreting business requirements and working closely with both internal teams and external application vendors.
- Data Modeling & Quality: Designing, developing, and maintaining data models and data quality rules to meet business needs (a representative quality-rule step is sketched in PySpark after this list).
- Troubleshooting: Addressing and resolving data-related issues, ensuring high data quality and integrity.
- Optimizing Data Code: Reviewing and optimizing PySpark/Python code, SQL queries, and scripts for better performance.
- Documentation & Code Maintenance: Writing clean, efficient, and well-documented code, and maintaining custom code and processes.
- Collaboration & Communication: Collaborating with stakeholders and teams to solve data-related issues, and delivering training and release notes to end users.
- Quality Assurance: Ensuring all tasks adhere to service-level agreements (SLAs) and turnaround times (TATs), including performing quality checks.
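For illustration, here is a minimal PySpark sketch of the kind of pipeline and data quality step described above; the storage account, paths, table, and column names are all hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

# Read raw records from a hypothetical Delta table in Azure Data Lake Storage Gen2.
orders = spark.read.format("delta").load(
    "abfss://raw@<storage-account>.dfs.core.windows.net/orders"  # hypothetical path
)

# Example data quality rules: non-null business key, positive amounts, no duplicates.
clean = (
    orders
    .filter(F.col("order_id").isNotNull())
    .filter(F.col("amount") > 0)
    .dropDuplicates(["order_id"])
)

# Persist validated records to a curated zone for downstream consumers.
clean.write.format("delta").mode("overwrite").save(
    "abfss://curated@<storage-account>.dfs.core.windows.net/orders"  # hypothetical path
)
```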
Key Skills & Qualifications:
- Experience with Databricks and Azure: At least 8 years of data engineering experience, with expertise in Azure Databricks, MS SQL, LakeFlow, Python, and other Azure technologies.
- Data Pipeline Development: Designing and building scalable data management systems using Azure Databricks, including creating data processing pipelines and ETL workflows.
- Integration Skills: Experience integrating Databricks with other Azure services like Azure Data Lake Storage and Azure SQL Data Warehouse.
- SQL & PySpark Proficiency: Strong experience with Spark SQL and PySpark, and with writing optimized SQL queries and Python scripts for data processing (one such optimization is sketched after this list).
- Data Governance & Data Models: Designing and implementing data models, schemas, and data governance in the Databricks environment.
- Data Warehousing & Data Quality: In-depth understanding of data warehousing concepts and experience implementing data quality rules using Databricks and tools such as Informatica Data Quality (IDQ).
- Analytical & Problem-Solving Skills: Strong organizational, analytical, and problem-solving skills, with keen attention to detail.
- Communication Skills: Excellent communication skills to engage effectively with both technical and non-technical stakeholders.
- Independent Work & Team Collaboration: Ability to work independently with minimal supervision, while also collaborating well within teams.
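To make the optimization work above concrete, here is a minimal PySpark sketch of one common performance technique, a broadcast join; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join_optimization").getOrCreate()

# Hypothetical tables: a large fact table and a small dimension table.
facts = spark.table("sales_facts")
stores = spark.table("store_dims")

# Broadcasting the small dimension table avoids shuffling the large fact
# table across the cluster, a common Spark SQL/PySpark performance win.
enriched = facts.join(broadcast(stores), on="store_id", how="left")

enriched.write.format("delta").mode("overwrite").saveAsTable("sales_enriched")
```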