Data Engineer


Job Location:

Toronto, Canada

Monthly Salary: $75,000 - $95,000
Posted on: 6 days ago
Vacancies: 1

Job Summary

Career Opportunity

Role Title

Data Engineer

Purpose of role

The Data Engineer serves as a technical practitioner and individual contributor responsible for designing, building, and maintaining the data pipelines, data models, and infrastructure that enable reliable and efficient data delivery across the organization. They work closely with data scientists, analytics teams, business stakeholders, and platform engineers to understand data requirements and deliver high-quality, well-documented data assets. The candidate will also collaborate with IT Security and Compliance teams to ensure data solutions adhere to security standards, data governance policies, and regulatory requirements.

This technical role will contribute across the full data engineering lifecycle, including data ingestion, transformation, storage optimization, and quality assurance. The candidate will build and maintain ETL/ELT processes using modern data stack technologies, implement data modeling best practices, and develop reusable frameworks that accelerate data delivery. They will apply software engineering principles to data development, including version control, code review, unit testing, and CI/CD practices, to ensure data assets are production-ready and maintainable.

This role will help advance the organization's data engineering capabilities by staying current with emerging technologies, tools, and best practices. The candidate will contribute to the evaluation and adoption of cloud-native data platforms, streaming architectures, and data lakehouse patterns. They will also support AI/ML initiatives by building and maintaining data pipelines that feed machine learning workflows, including feature engineering, data preparation for model training, and integration of model outputs into downstream data products and reporting systems.

Job Description

Key Responsibilities

Data Engineering

  • Develop and optimize high-performance, end-to-end data transformation and reporting solutions using PySpark, Spark SQL, and T-SQL within cloud-native environments such as Databricks
  • Architect and implement complex logical and physical data models that support modern patterns such as Data Lakehouse, Data Mesh, and Data Fabric
  • Construct robust ETL/ELT pipelines that facilitate seamless data ingestion and transformation from diverse sources into production-ready data assets
  • Build and maintain specialized data structures, including feature stores and automated retraining pipelines, to support the operationalization of machine learning models
  • Design test strategies, test plans, and test cases
  • Own and maintain assigned technical documentation, keeping it complete and current
  • Collaborate with the PMO on project estimation and resourcing
  • Participate in project risk assessments to identify potential obstacles ahead of time

AI-Driven ETL/ELT Automation

  • Leverage AI-accelerated development tools (e.g. Databricks Assistant, GitHub Copilot, Claude Code) to automate code generation, unit testing, and the refactoring of legacy ETL processes
  • Implement AI-powered automation for pipeline monitoring, utilizing machine learning for anomaly detection and the development of self-healing data infrastructure
  • Utilize AI tools to accelerate metadata management and data lineage mapping, ensuring that automated pipelines remain compliant with governance standards

Data Governance

  • Operationalize data quality and retention policies directly within data pipelines to ensure the integrity and security of the data lakehouse
  • Automate the capture of technical metadata and lineage as part of the standard engineering lifecycle to satisfy regulatory and compliance requirements

Key Qualifications

  • Education (minimum required): University graduate with a major in computer science, or equivalent work experience
  • Experience (minimum required): 2 years of experience in data engineering and operations; financial institution experience is an asset
  • Working knowledge of modern data architectures, including the practical application and construction of Data Lakehouse, Cloud Data Warehouse, Data Mesh, and Data Fabric environments
  • Hands-on experience developing and optimizing complex ETL/ELT pipelines using PySpark, Spark SQL, and T-SQL within cloud-native environments such as Databricks, Snowflake, or Amazon Redshift
  • Technical proficiency in operationalizing machine learning workflows, including the development of feature stores, model serving layers, and automated retraining pipelines
  • Experience with Agile work planning and CI/CD platforms (e.g. Azure DevOps, GitHub Enterprise) to automate the deployment and validation of data assets
  • Expertise in leveraging AI-accelerated software development tools (e.g. GitHub Copilot, Databricks Assistant, Claude Code) to automate code generation, accelerate unit testing, and refactor legacy ETL logic
  • Experience applying AI/ML techniques to automate data engineering tasks such as metadata extraction, schema mapping, and the creation of self-healing data infrastructure
  • Proven track record of designing and implementing end-to-end data transformation and reporting solutions, with a deep understanding of logical and physical data modeling
  • Experience with data analytics and reporting tools such as Power BI, MicroStrategy, and SAS
  • Hands-on experience implementing technical data governance, including the automated enforcement of data quality rules, metadata management, and data retention policies within pipeline code
  • Excellent verbal and written communication skills (e.g. developing business cases and delivering presentations to senior management)
  • Strong analytical and problem-solving skills
  • Well organized and innovative, with a high level of initiative
  • Detail-oriented, with the ability to manage several complex processes and tasks with a high level of accuracy
  • Demonstrated ability to work independently and handle changing priorities while meeting tight deadlines
  • Strong interpersonal skills, with the ability to build relationships and work in a team environment

#LI-Hybrid

Salary Range:

$75,000.00 - $95,000.00

The actual base salary for this position will depend on several factors, including job-related skills and experience. In addition to base pay, eligible employees may participate in a discretionary variable incentive plan; results are subject to both individual and company performance.

Please note that this posting is intended to fill an existing vacancy; however, there may be instances where more than one vacancy is available for the same role.

Equal Opportunity Employment and Inclusion: At Foresters Financial, we are committed to sustaining an equal opportunity environment for all job applicants. We embrace Inclusion, Diversity, and Equity (IDE) as a core strategic objective for building strong, innovative teams in which all our employees can show up wholly and authentically as themselves.

Foresters Financial strives to provide an accessible candidate experience for prospective employees with different abilities. If you anticipate needing any type of accommodation during the recruitment process, please email in advance of your appointment.

Thank you for choosing Foresters. Only those candidates selected for further consideration will be contacted by our Talent Acquisition Team.


Required Experience:

IC


About Company


Foresters Financial stands out from other financial services firms. We believe in our purpose, which is to enrich family and community well-being. It's something our employees embrace because it allows us to make a difference at work and in our communities. Giving back is not a n ...
