- 12-month contract opportunity
- Start ASAP
- Location: Sydney (hybrid)
- Opportunity to work with one of the largest and most advanced Data Engineering teams in the country
As a Senior Data Engineer with expertise in software development and programming and a passion for building data-driven solutions, you're ahead of trends and work at the forefront of Big Data and Data Warehouse technologies.
We are seeking people who are:
- Passionate about building next-generation data platforms and data pipeline solutions across the bank.
- Ready to apply state-of-the-art coding practices, driving high-quality outcomes that solve core business objectives and minimise risk.
- Capable of creating both technology blueprints and engineering roadmaps for a multi-year data transformation journey.
- Experienced in providing data-driven solutions that source data from various enterprise data platforms into a Cloudera Hadoop Big Data environment using technologies like Spark, MapReduce, Hive, Sqoop and Kafka; transform and process the source data to produce data assets; and transform and egress the results to other data platforms such as Teradata or other RDBMS systems (see the sketch after this list).
- Experienced in building effective and efficient Big Data and Data Warehouse frameworks, capabilities and features using a common programming language (Scala, Java or Python) with proper data quality assurance and security controls.
- Experienced in designing, building and delivering optimised enterprise-wide data ingestion, data integration and data pipeline solutions for Big Data and Data Warehouse platforms.
- Confident in building group data products or data assets from scratch by integrating large sets of data derived from hundreds of internal and external sources.
- Able to lead and mentor other data engineers on project work or initiatives.
- Experienced in, and responsible for, data security and data management.
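By way of illustration, the sketch below shows the ingest-transform-egress pattern described above in Spark/Scala. It is a minimal example only: the HDFS path, Hive table, Teradata JDBC URL and credentials are placeholders, not references to any real bank system.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum, to_date}

object IngestTransformEgress {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-transform-egress")
      .enableHiveSupport() // data assets are persisted to Hive on the Hadoop cluster
      .getOrCreate()

    // Ingest: read raw extracts landed on HDFS (path is a placeholder)
    val raw = spark.read.parquet("hdfs:///data/landing/transactions")

    // Transform: cleanse and aggregate the source data into a reusable data asset
    val asset = raw
      .filter(col("amount").isNotNull)
      .groupBy(col("account_id"), to_date(col("txn_ts")).as("txn_date"))
      .agg(sum("amount").as("daily_total"))

    // Persist the data asset in Hive for downstream consumers
    asset.write.mode("overwrite").saveAsTable("assets.daily_account_totals")

    // Egress: push the same asset out to Teradata over JDBC
    // (URL, table name and credentials below are placeholders)
    asset.write
      .format("jdbc")
      .option("url", "jdbc:teradata://teradata-host/DATABASE=analytics")
      .option("dbtable", "daily_account_totals")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("TD_PASSWORD", ""))
      .mode("append")
      .save()

    spark.stop()
  }
}
```

In practice the ingest step might equally come from Kafka or Sqoop; the point of the sketch is the source-to-asset-to-egress flow, not the specific connectors.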
Technical skills
Experience in designing, building and delivering enterprise-wide data ingestion, data integration and data pipeline solutions using a common programming language (Scala, Java or Python) on a Big Data and Data Warehouse platform. Preferably at least 5 years of hands-on experience in a Data Engineering role.
Experience in building data solutions on the Hadoop platform using Spark, MapReduce, Sqoop, Kafka and various ETL frameworks for distributed data storage and processing. Preferably at least 5 years of hands-on experience.
Strong Unix/Linux shell scripting and programming skills in Scala, Java or Python.
Proficient in SQL scripting, writing complex SQL for building data pipelines.
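As a rough illustration of the kind of pipeline SQL involved, the sketch below runs one aggregation step through Spark SQL in Scala; the raw.transactions and assets.customer_monthly_spend tables are invented for the example.

```scala
import org.apache.spark.sql.SparkSession

object SqlPipelineStep {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sql-pipeline-step")
      .enableHiveSupport()
      .getOrCreate()

    // Build a monthly customer-spend asset from raw transactions.
    // Both table names are hypothetical.
    spark.sql("""
      INSERT OVERWRITE TABLE assets.customer_monthly_spend
      SELECT customer_id,
             date_format(txn_ts, 'yyyy-MM') AS txn_month,
             SUM(amount)                    AS total_spend,
             COUNT(*)                       AS txn_count
      FROM raw.transactions
      WHERE txn_ts IS NOT NULL
      GROUP BY customer_id, date_format(txn_ts, 'yyyy-MM')
    """)

    spark.stop()
  }
}
```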
Experience in leading and mentoring data engineers, including owning internal business stakeholder relationships and working with consultants.
Experience working in Agile teams, including working closely with internal business stakeholders.
Familiarity with data warehousing and/or data mart builds in Teradata, Oracle or other RDBMS systems is a plus.
Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS or Ab Initio is a plus.
Experience with Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
Experience with AWS technologies (EMR, Redshift, DocumentDB, S3, etc.) is a plus.