HM Note: This hybrid contract role is three (3) days in office. Candidates' resumes must include first and last name.
Description
Responsibilities
- Participate in product teams to analyze systems requirements; architect, design, code, and implement cloud-based data and analytics products that conform to standards
- Design, create, and maintain cloud-based data lake and lakehouse structures, automated data pipelines, and analytics models
- Liaise with cluster IT colleagues to implement products, conduct reviews, resolve operational problems, and support business partners in the effective use of cloud-based data and analytics products
- Analyze complex technical issues, identify alternatives, and recommend solutions
- Support the migration of legacy data pipelines from Azure Synapse Analytics and Azure Data Factory (including stored procedures, views used by BI teams, and Parquet files in Azure Data Lake Storage (ADLS)) to modernized Databricks-based solutions leveraging Delta Lake and native orchestration capabilities (a minimal sketch of this pattern follows this list)
- Support the development of standards and a reusable framework that streamlines pipeline creation
- Participate in code reviews and prepare/conduct knowledge transfer to maintain code quality, promote team knowledge sharing, and enforce development standards across collaborative data projects
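For context on the migration bullet above, here is a minimal sketch of the core pattern: reading Parquet output that a legacy ADF/Synapse pipeline landed in ADLS and re-registering it as a Delta table in Databricks. The storage account, container, paths, and table names are hypothetical placeholders, not details from this engagement.

```python
# Sketch: re-register legacy ADF Parquet output as a Delta table.
# All names below (storage account, container, schema.table) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Legacy ADF pipelines landed Parquet files at a path like this one.
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"

# Read the Parquet output produced by the legacy pipeline.
orders_df = spark.read.parquet(source_path)

# Rewrite as a Delta table so downstream BI views can be repointed
# from Synapse to Databricks.
(orders_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.orders"))
```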
General Skills
- Experience with multiple cloud-based data and analytics platforms and coding/programming/scripting tools to create, maintain, support, and operate cloud-based data and analytics products, with a preference for Microsoft Azure
- Experience designing, creating, and maintaining cloud-based data lake and lakehouse structures, automated data pipelines, and analytics models in real-world implementations
- Strong background in building and orchestrating data pipelines using services like Azure Data Factory and Databricks
- Demonstrated ability to organize and manage data in a lakehouse following medallion architecture (illustrated in the sketch after this list)
- Background with Databricks Unity Catalog for governance is a plus
- Proficient in using Python and SQL for data engineering and analytics development
- Familiar with CI/CD practices and tools for automating deployment of data solutions and managing the code lifecycle
- Comfortable conducting and participating in peer code reviews in GitHub to ensure quality, consistency, and best practices
- Experience in assessing client information technology needs and objectives
- Experience in problem-solving to resolve complex, multi-component failures
- Experience preparing knowledge transfer documentation and conducting knowledge transfer
- Experience working on an Agile team
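As a point of reference for the medallion architecture bullet above, this is one common bronze/silver/gold shape for a Databricks lakehouse. It is a sketch only; the schemas, tables, and columns are hypothetical examples, not a prescribed design.

```python
# Sketch: bronze -> silver -> gold flow in a medallion lakehouse.
# Schema, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested records, kept as-is for auditability.
bronze = spark.read.table("bronze.orders")

# Silver: cleaned and conformed -- deduplicate and enforce types.
silver = (bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate suitable for Power BI consumption.
gold = (silver
    .groupBy("order_date")
    .agg(F.sum("amount").alias("daily_revenue")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```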
Desirable Skills
- Written and oral communication skills to participate in team meetings, write/edit systems documentation, prepare and present written reports on findings and alternative solutions, and develop guidelines/best practices
- Interpersonal skills to explain and discuss the advantages and disadvantages of various approaches
Technology Stack
- Azure Storage, Azure Data Lake, Azure Databricks (Lakehouse), Azure Synapse
- Python, SQL
- Power BI
- GitHub
Must Haves:
- 5 years' experience in an Azure environment
- 5 years' data engineering experience with ADF and Databricks
- 5 years' programming experience with Python and SQL