RQ08913 - Software Developer - ETL - Senior

Job Location: Toronto, Canada

Monthly Salary: Not Disclosed

Vacancy: 1

Job Description

Responsibilities:

  • Design technical solutions for data acquisition and storage into our centralized data repository.
  • Develop ELT scripts, design data-driven logic, and conduct unit testing.
  • Conduct database modeling and design to improve overall performance.
  • Produce design artifacts and documentation that will allow future support of the implemented solutions.
  • Investigate and resolve incidents, identifying whether a problem is caused by the data-loading code or by bad data received from the data provider.
  • Execute service requests related to routine and ad hoc data loads.
  • Perform data quality checks and report on data quality issues.
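The incident-triage and data-quality duties above can be illustrated with a minimal, dependency-free Python sketch (the field names `id`, `amount`, and `loaded_at` are invented for illustration; in practice this logic would live in the ELT scripts):

```python
def check_quality(records, required_fields=("id", "amount", "loaded_at")):
    """Split provider records into clean rows and rejects, with a reason per reject."""
    clean, rejects = [], []
    for rec in records:
        # Flag records the provider delivered with empty or absent required fields.
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            rejects.append({"record": rec, "reason": f"missing fields: {missing}"})
        elif not isinstance(rec["amount"], (int, float)):
            rejects.append({"record": rec, "reason": "non-numeric amount"})
        else:
            clean.append(rec)
    return clean, rejects
```

Keeping a reason string on each reject is what lets a support developer distinguish bad data received from the provider from a defect in the loading code itself.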

Experience and Skill Set Requirements

Technical Skills

10 years of experience in:

  • Designing and developing scalable Medallion Data Lakehouse architectures.
  • Expertise in data ingestion, transformation, and curation using Delta Lake and Databricks.
  • Experience integrating structured and unstructured data sources into star/snowflake schemas.
  • Building, automating, and optimizing complex ETL/ELT pipelines using Azure Data Factory (ADF), Databricks (PySpark, SQL, Delta Live Tables), and dbt.
  • Implementing orchestrated workflows and job scheduling in Azure environments.
  • Strong knowledge of relational databases (SQL Server, Synapse, PostgreSQL) and dimensional modeling.
  • Advanced SQL query optimization, indexing, partitioning, and data replication strategies.
  • Experience with Apache Spark, Delta Lake, and distributed computing frameworks in Azure Databricks.
  • Working with Parquet, ORC, and JSON formats for optimized storage and retrieval.
  • Deep expertise in Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure SQL, Event Hubs, and Azure Functions.
  • Strong understanding of cloud security, RBAC, and data governance.
  • Proficiency in Python (PySpark), SQL, and PowerShell for data engineering workflows.
  • Experience with CI/CD automation (Azure DevOps, GitHub Actions) for data pipelines.
  • Implementing data lineage, cataloging, metadata management, and data quality frameworks.
  • Experience with Unity Catalog for managing permissions in Databricks environments.
  • Expertise in Power BI (DAX, data modeling, performance tuning).
  • Experience integrating Power BI with Azure Synapse and Databricks SQL Warehouses.
  • Familiarity with MLflow, AutoML, and embedding AI-driven insights into data pipelines.

50 points
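On Databricks the Medallion layers named above would be Delta tables maintained with PySpark or Delta Live Tables; as a rough, dependency-free Python sketch of the bronze → silver → gold flow (the record shape and the `region`/`amount` fields are invented for illustration):

```python
import json

def to_silver(bronze_rows):
    """Silver layer: parse raw landed JSON, deduplicate by id, standardize types."""
    seen, silver = set(), []
    for raw in bronze_rows:            # bronze: raw JSON strings exactly as landed
        rec = json.loads(raw)
        if rec["id"] in seen:          # drop duplicate deliveries from the provider
            continue
        seen.add(rec["id"])
        rec["amount"] = float(rec["amount"])
        silver.append(rec)
    return silver

def to_gold(silver_rows):
    """Gold layer: business-level aggregate, total amount per region."""
    totals = {}
    for rec in silver_rows:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals
```

The point of the layering is that bronze keeps the data exactly as received (so incidents can be traced back to the provider), while silver and gold are reproducible transformations of it.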

Core Skill and Experience

  • 10 years of experience with technical systems specifications, translating them into working, tested applications for large, complex, mission-critical applications.
  • 10 years of experience in technical analysis, program code, detailed programming and report specifications, program design, and writing and/or generating code and conducting unit tests.
  • 10 years of experience with software across various computing platforms, operating systems, database technologies, communication protocols, middleware, and gateways.
  • 10 years of experience in developing and maintaining system design models, technical documentation, and specifications.
  • 5 years of experience in conducting technical evaluation and assessment of options for technical design issues, application configuration aspects, integration capabilities, related tools and utilities, and gap analysis of integration components against technical requirements/specifications/documentation.
  • End-to-end SDLC experience.

30 points
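A concrete instance of the unit-testing expectation above, using Python's standard `unittest` module against a hypothetical load-routine helper (`parse_amount` is invented for illustration):

```python
import unittest

def parse_amount(raw):
    """Hypothetical load helper: normalize a provider amount string like '1,234.00'."""
    return float(raw.replace(",", "").strip())

class ParseAmountTest(unittest.TestCase):
    def test_plain_value(self):
        self.assertEqual(parse_amount("42.50"), 42.5)

    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,234.00"), 1234.0)

    def test_bad_provider_data_raises(self):
        # Bad data from the provider should fail loudly, not load silently.
        with self.assertRaises(ValueError):
            parse_amount("N/A")
```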

General Skills

  • Demonstrated strong leadership and people management skills.
  • Proven technical leadership skills, with the ability to identify areas for improvement and recommend solutions.
  • Exceptional analytical, problem-solving, and decision-making skills.
  • Demonstrated strong interpersonal, verbal and written communication, and presentation skills.
  • Proven troubleshooting and critical thinking experience.
  • Demonstrated ability to apply strong listening skills to facilitate issue resolution.
  • Effective consulting skills to engage with all stakeholders, with a proven track record for building strong working relationships.
  • Strong interpersonal, facilitation, and negotiation skills, with the ability to build rapport with stakeholders and drive negotiations to a successful outcome.
  • Excellent customer service skills, including tact and diplomacy, to ensure client needs are managed effectively.
  • A motivated, flexible, detail-oriented, and creative team player with perseverance, excellent organization and multitasking abilities, and a proven track record for meeting strict deadlines.

15 points

Public Sector/Healthcare Experience

  • Knowledge of Public Sector Enterprise Architecture artifacts (or similar), processes, and practices, and the ability to produce technical documentation that complies with industry-standard practices.
  • Knowledge of Project Management Institute (PMI) and Public Sector I&IT project management methodologies.
  • Knowledge and understanding of Ministry policy and IT project approval processes and requirements.
  • Experience with large, complex, Health-related IT projects.

5 points


MUST HAVES:

10 years of experience in all of the Technical Skills listed above, with the exception of the MLflow/AutoML familiarity item.

Employment Type: Full Time
