We are seeking an experienced ETL Developer with strong expertise in both on-premises and cloud-based data integration. The ideal candidate will have a solid background in SSIS and AWS Glue, with a passion for optimizing data pipelines and modernizing legacy systems.
Location: Hybrid (1 day per week onsite)
Clearance Requirement: Active security clearance required, or must be eligible for a Reliability Security Clearance
Responsibilities:
- Design, develop, and maintain ETL processes using SSIS and AWS Glue
- Reverse engineer legacy SQL code and migrate it to modern data platforms
- Optimize and troubleshoot data pipelines for performance and reliability
- Apply data modeling and engineering best practices
- Conduct unit and system integration testing
- Document business rules and technical specifications clearly and thoroughly
Requirements:
- Must have 5+ years of hands-on experience with SSIS and SQL stored procedures
- Must have 2+ years of experience developing AWS Glue data pipelines
- Must have strong SQL and Python programming skills
- Solid understanding of Agile methodologies
- Familiarity with QA processes including unit and integration testing
- Excellent problem-solving and analytical skills
- Strong communication and documentation abilities
- Ability to work independently and collaboratively in a fast-paced environment
Additional Qualifications:
- 5+ years of experience in data quality assurance and testing, including developing and executing functional test cases, validating data pipelines, and coordinating deployments from development to production environments
- Has supported at least one enterprise/government organization with Big Data platforms and tools, such as Hadoop (HDFS, Pig, Hive, Spark), Big SQL, NoSQL, and Scala, ideally within cloud-based environments
- 3+ data analysis and modeling projects, including working with structured and unstructured databases, building automated data quality pipelines, and collaborating with data engineers and architects to ensure high data integrity
- Experience developing and executing test cases for Big Data pipelines, with deployments across dev, test, and production environments
- Strong SQL skills for validation, troubleshooting, and data profiling
- Applied knowledge of Big Data platforms including Hadoop (HDFS, Hive, Pig), Spark, Big SQL, NoSQL, and Scala
- Familiarity with cloud data ingestion and integration methods
- Experience working with structured and unstructured data formats
- Understanding of data modeling, data structures, and use-case-driven design
- Experience in test automation for data validation pipelines is a strong asset
- Prior experience with Genesys Cloud testing is a plus
- Exposure to Tableau or other BI tools is beneficial
Hybrid role: 2 days/week onsite in North Vancouver