Job Title: Data Engineer with OpenShift Experience
Location: Mississauga, ON
Job Type: Contract opportunity
Our Challenge
We are seeking a skilled Data Engineer to design, develop, and maintain scalable data pipelines and platforms supporting critical banking operations. The ideal candidate will have expertise in Python, PySpark, SQL, Snowflake, Azure Data Factory, Data Lakes, and OpenShift, along with a strong understanding of banking data and compliance requirements.
Responsibilities:
- Design and develop robust ETL/ELT data pipelines using Python, PySpark, and SQL to support banking applications and reporting needs.
- Build and optimize data models within Snowflake and other cloud data platforms to enable efficient analytics.
- Develop and manage data ingestion workflows using Azure Data Factory and other orchestration tools.
- Design and maintain scalable Data Lakes and Data Warehouses tailored for banking data assets.
- Deploy and operate data solutions within containerized environments such as OpenShift.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Ensure data security, compliance, and governance standards are integrated into all data solutions.
- Monitor, troubleshoot, and optimize data pipelines for performance and reliability.
- Incorporate automation and CI/CD practices to streamline deployment and updates.
Requirements:
- 7 years of experience as a Data Engineer or similar role in the banking or financial services domain.
- Strong proficiency in Python and PySpark for data processing and transformation.
- Solid experience with SQL and data modeling in Snowflake or similar cloud data platforms.
- Hands-on experience with cloud data integration tools such as Azure Data Factory, AWS Glue, or equivalent.
- Experience designing and implementing Data Lakes and data storage solutions.
- Knowledge of container orchestration platforms, especially OpenShift or Kubernetes.
- Familiarity with data governance, security standards, and compliance requirements in banking.
- Experience working in Agile environments with CI/CD pipelines.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to collaborate effectively across teams.
Thanks,
Sanjay Kumar